- Run on separate servers
- Interact through the network
- Scale independently
- Often, separate teams develop these as standalone products.
But making systems distributed has its costs. One now needs to handle network delays, failures of separate nodes, bandwidth limitations, and more.
Fortunately, there are approaches that address these issues so that your system can remain stable. And if a particular service does have a problem, you can let it “fail fast” and have it fixed by your development team before users start complaining.
How to Make Your Microservices More Stable
Microservices architecture as a whole has its pros and cons, which are beyond the scope of this post.
So let us just assume you’ve decided that this particular architecture suits you, and you want to use it in your application. What can you do to make microservices more stable?
There are several approaches that you can use to make sure your system is reliable and fast.
Request Isolation via Threads
This means that each microservice call will be performed in a separate thread, and later the result of its execution will come back to the main calling thread for further processing.
Pros. More control over microservice execution. For example, you can stop a microservice thread after a certain timeout, e.g., if the underlying network call takes too long. You can also execute requests to different microservices in parallel, which speeds up your application and decreases response delay.
You can also use isolated thread pools so that even an excessive load upon one microservice will not cause a lack of threads for others.
Cons. Additional threads consume memory and require more processor resources to manage. This inevitably slows the system down somewhat, but the drawback is usually outweighed by the increased reliability and the performance gain provided by concurrency.
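As a rough sketch of the idea in plain Java (the service names, pool sizes, and returned values are illustrative assumptions, not a real API): each downstream service gets its own small thread pool, so overload on one service cannot starve the threads serving another, and independent requests run in parallel.

```java
import java.util.concurrent.*;

public class ParallelIsolation {
    // Daemon threads so the pools never prevent the JVM from exiting.
    static ExecutorService newDaemonPool(int size) {
        return Executors.newFixedThreadPool(size, r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });
    }

    // One small, dedicated pool per downstream service: excessive load on
    // one microservice cannot exhaust the threads used for the other.
    static final ExecutorService userPool  = newDaemonPool(4);
    static final ExecutorService orderPool = newDaemonPool(4);

    public static String fetchDashboard() {
        // Both requests run in parallel, each on its own isolated pool.
        CompletableFuture<String> user =
            CompletableFuture.supplyAsync(() -> "alice", userPool);
        CompletableFuture<String> orders =
            CompletableFuture.supplyAsync(() -> "3 orders", orderPool);
        // The results come back to the main calling thread for processing.
        return user.join() + ": " + orders.join();
    }

    public static void main(String[] args) {
        System.out.println(fetchDashboard());   // → alice: 3 orders
    }
}
```

In a real application the lambdas would perform network calls, and libraries like Hystrix manage such per-service pools for you.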
Timeouts with Fallback Responses

If a microservice call takes too long, you can immediately return a default or cached value to the client and stop waiting for the actual response from the microservice. You can configure timeouts of different durations for each microservice, depending on its importance and expected delay.
Pros. A better user experience, since the user no longer has to wait for a slow response.
Cons. The data returned to the user may not be accurate, so you need to consider whether the speed of the response is indeed more important than its accuracy.
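A minimal sketch of the timeout-plus-fallback idea using only `java.util.concurrent` (the timeouts and fallback values here are assumptions for illustration):

```java
import java.util.concurrent.*;

public class TimeoutFallback {
    static final ExecutorService pool = Executors.newCachedThreadPool(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);   // don't keep the JVM alive for worker threads
        return t;
    });

    // Wait up to timeoutMs for the real response; otherwise return the
    // default/cached value immediately and stop waiting.
    static String callOrFallback(Callable<String> call, long timeoutMs, String fallback) {
        Future<String> future = pool.submit(call);
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            future.cancel(true);   // interrupt the hanging call
            return fallback;
        }
    }

    public static void main(String[] args) {
        // An important service gets a generous timeout, a cosmetic one a short one.
        String profile = callOrFallback(() -> "full profile", 1000, "guest profile");
        String banner = callOrFallback(() -> {
            Thread.sleep(3000);   // simulate a slow network call
            return "promo";
        }, 50, "default banner");
        System.out.println(profile + " | " + banner);   // → full profile | default banner
    }
}
```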
The Circuit Breaker Design Pattern
This design pattern allows you to stop calling a microservice altogether if it fails too often, and instead return some default response to the user immediately. That does not mean the microservice will never be called again: you can configure a delay after which the system will try calling it once more to see whether it is back to normal, and resume using it if so. This approach lowers the pressure in your system caused by the unhealthy microservice.
Pros. Reduces response delays for the user, since the system does not wait for a timeout during each call to the broken microservice.
Cons. Data accuracy may suffer severely, because while the circuit breaker is open the microservice is not called at all.
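A bare-bones circuit breaker might be sketched like this. This is a simplified illustration of the pattern, not Hystrix's actual implementation; the failure threshold and cool-down period are arbitrary assumptions.

```java
// After `threshold` consecutive failures the breaker opens and callers get
// the fallback immediately, without any network call; after `retryAfterMs`
// it lets one trial call through to check whether the service has recovered.
public class CircuitBreaker {
    enum State { CLOSED, OPEN }
    private State state = State.CLOSED;
    private int failures = 0;
    private long openedAt = 0;
    private final int threshold;
    private final long retryAfterMs;

    CircuitBreaker(int threshold, long retryAfterMs) {
        this.threshold = threshold;
        this.retryAfterMs = retryAfterMs;
    }

    String call(java.util.concurrent.Callable<String> service, String fallback) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt < retryAfterMs) {
                return fallback;            // fail fast: skip the network call
            }
            // Cool-down elapsed: allow one trial call ("half-open").
        }
        try {
            String result = service.call();
            failures = 0;
            state = State.CLOSED;           // healthy again
            return result;
        } catch (Exception e) {
            if (++failures >= threshold) {
                state = State.OPEN;         // too many failures: open the circuit
                openedAt = System.currentTimeMillis();
            }
            return fallback;
        }
    }
}
```

Production-grade breakers track failure rates over a rolling window rather than a simple consecutive-failure count, but the state machine is the same.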
Request Collapsing

You may want to gather several requests to a microservice into a package and deliver them all at once, rather than performing a resource-consuming network call every time.
Pros. Lower network load and fewer threads used in your application.
Cons. Usually the system waits a certain amount of time to gather a package of requests, or simply limits the amount of data to be retrieved in one package. As a result, some requests have to wait until a package is assembled, so you will need to balance network load against response speed.
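A toy illustration of the collapsing idea (a real collapser, such as Hystrix's, would also flush on a timer and fan results back to the individual callers; the class and method names here are assumptions):

```java
import java.util.*;

// Individual lookups are queued and flushed as one batched network call
// once the batch is full (size-based trigger only, for brevity).
public class RequestCollapser {
    private final int batchSize;
    private final List<String> pending = new ArrayList<>();
    private final List<List<String>> batchesSent = new ArrayList<>();

    RequestCollapser(int batchSize) { this.batchSize = batchSize; }

    void request(String id) {
        pending.add(id);
        if (pending.size() >= batchSize) flush();
    }

    void flush() {
        if (pending.isEmpty()) return;
        // One network round-trip for the whole batch instead of one per id.
        batchesSent.add(new ArrayList<>(pending));
        pending.clear();
    }

    int batchCount() { return batchesSent.size(); }
}
```

With a batch size of 3, four individual requests result in only two outgoing calls instead of four.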
Caching

If you send the same request to a microservice several times, why not save its result and reuse it later? Fortunately, existing frameworks allow us to configure caching relatively easily.
Pros. A microservice response stays consistent across the user’s requests, and no time is wasted on unnecessary network calls.
Cons. If the results are used by the same user, data accuracy won’t usually suffer. But caching can become problematic when applied to requests from multiple users. Additionally, you will need more memory to hold the cached values.
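A minimal sketch of per-request memoization (a production cache would add expiry and eviction; the key and value formats here are purely illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Repeated requests for the same key skip the network entirely.
public class ResponseCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private int remoteCalls = 0;   // counts how often the service is really hit

    String getPrice(String productId) {
        return cache.computeIfAbsent(productId, id -> {
            remoteCalls++;                 // stand-in for the expensive network call
            return "price-for-" + id;
        });
    }

    int remoteCalls() { return remoteCalls; }
}
```

Three lookups of the same product trigger only one "remote" call; the other two are served from memory.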
Let us graphically summarize both pros and cons of each approach mentioned above, assuming that you use a default static response if a microservice becomes unavailable.
As you can see, the main purpose of the suggested approaches is to make the environment as stable as possible, so that the end user will not notice outages of a particular microservice or increased delays in the network and will receive the responses quickly.
Put differently, these approaches ensure that a faulty system fails fast and relatively painlessly. This advantage does not come for free: usually the main sacrifice is data accuracy.
So when building your system with the use of microservices, you should ask yourself what is more important for your particular situation: to always receive up-to-date data, or to be able to receive software feedback quickly and without those “Please come back later” annoyances?
In many practical cases, the latter is more justifiable. As a quick example: suppose you rely on a microservice to retrieve prices for a non-binding comparison on your website. If this microservice becomes unavailable, it can ruin the user experience. In this situation, you may be better off ignoring the very latest price updates and smoothly serving a cached response, at the small risk that some of the displayed prices are slightly outdated.
All of the approaches outlined above can either be implemented by the development team from scratch or obtained from third-party libraries, such as Hystrix, the Akka Toolkit, or Apache Commons.
Need more advice on microservices development or on refactoring your existing monolithic application? We are always up for a chat. Contact Edvantis to schedule a call!