Microservices are a major trend in software development. To reduce infrastructure costs and increase horizontal scalability, systems are becoming more distributed: they often consist of several independent parts. These parts run on separate servers, interact over the network, scale independently, and are oftentimes developed by separate teams as standalone products. But distribution has its costs: you now need to handle network delays, failures of individual nodes, bandwidth limitations, and more. Fortunately, there are approaches that address these issues and help the distributed system remain stable (or, if a particular service does have a problem, let it “fail fast” and get fixed by your development team before the users of the system complain).

Microservices architecture as a whole has its pros and cons, which are beyond the scope of this post. Let us just assume you have decided that this particular architecture suits you, and you are committed to using it in your application. What can you do to make your system more stable? Several approaches can help keep it reliable and fast.

  • Request isolation via threads. Each microservice call is performed in a separate thread, and the result of its execution is later returned to the main calling thread for further processing.
    • Pros. More control over microservice execution. For example, a microservice thread can be stopped after a certain timeout, e.g., when the underlying network call takes too long. Or it can run in parallel with other microservice requests, speeding up your application and decreasing the response delay. You can also use isolated thread pools so that excessive load on one microservice does not starve the others of threads.
    • Cons. Additional threads consume a certain amount of memory and require extra processor resources to manage. This overhead slows the system somewhat, but it is usually outweighed by the increased reliability and the performance gained through concurrency.
  • Timeouts. If a microservice call takes too long, you can immediately return a default or cached value to the client and stop waiting for the actual response from the microservice. Timeouts of different durations may be configured for each microservice depending on its importance and expected delay.
    • Pros. Better user experience, since the user never waits longer than the configured timeout for a response.
    • Cons. The data returned to the user may be stale or inaccurate, so you need to consider whether the speed of the response is indeed more important than its accuracy.
  • A circuit breaker design pattern. This pattern lets you stop calling a microservice altogether if it fails too often, and instead return a default response to the user immediately. It does not mean the microservice will never be called again: you can configure a delay after which the system tries calling it again to check whether it is back to normal, and if so, resumes using it. This approach relieves the pressure that an unhealthy microservice puts on your system.
    • Pros. Reduces response delays for the user, since the system does not wait for a timeout during each call to the broken microservice.
    • Cons. Data accuracy may suffer severely, because while the circuit breaker is open the microservice is not called at all.
  • Request collapsing. Rather than performing a resource-consuming network call for every request, you can gather several requests to a microservice into a batch and deliver them all at once.
    • Pros. Less network load and lower thread utilization in your application.
    • Cons. Usually the system waits a certain amount of time to gather a batch of requests, or limits the amount of data retrieved in one batch. Some requests will therefore have to wait until a batch is assembled, which means you will need to balance network load against response speed.
  • Request caching. If you send the same request to a microservice several times, why not save its result and reuse it later? Fortunately, existing frameworks make caching relatively easy to configure.
    • Pros. The microservice response stays consistent across the user’s requests, and no time is wasted on unnecessary network calls.
    • Cons. If the cached results are used only by the same user, data accuracy does not usually suffer, but caching can become problematic when responses are shared across multiple users. Additionally, extra memory is required to hold cached values.
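The first two approaches, thread isolation and timeouts, can be sketched together with Python’s standard thread pool. Everything here is a stand-in: `fetch_recommendations` is a hypothetical microservice call, and `FALLBACK` plays the role of the default response.

```python
import concurrent.futures

FALLBACK = ["popular-item"]  # default response used when the call times out


def fetch_recommendations(user_id):
    # Hypothetical downstream call; in a real system this would be an
    # HTTP/RPC request to the microservice.
    return ["item-1", "item-2"]


def call_with_timeout(executor, user_id, timeout_s=0.5):
    """Run the microservice call in a separate thread; fall back on timeout."""
    future = executor.submit(fetch_recommendations, user_id)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # The caller gets the default value; the worker thread can be
        # abandoned (or cancelled if it has not started yet).
        return FALLBACK


# A dedicated pool isolates this microservice: exhausting its workers
# cannot starve the threads that serve calls to other microservices.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    recommendations = call_with_timeout(pool, user_id=42)
```

Using one pool per microservice (the "bulkhead" variant of thread isolation) is what makes the per-service load limits possible.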
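The circuit breaker logic can be illustrated with a minimal, single-threaded sketch (real implementations also deal with thread safety and failure-rate windows; the class and parameter names here are ours, not from any particular library):

```python
import time


class CircuitBreaker:
    """Opens after max_failures consecutive failures; after reset_after
    seconds it lets one trial call through to probe the microservice."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback  # circuit open: skip the call entirely
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback
        self.failures = 0  # success closes the circuit again
        return result
```

While the circuit is open, `func` is never invoked, which is exactly what takes the pressure off the unhealthy microservice.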
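Request collapsing can be reduced to its core idea, buffering individual lookups and flushing them as one batched call. This sketch flushes only on batch size; a production collapser would also flush on a timer. `fetch_prices_batch` is a hypothetical batched endpoint.

```python
def fetch_prices_batch(product_ids):
    # Hypothetical batched microservice endpoint: one network call
    # answers many product-id lookups at once.
    return {pid: 9.99 for pid in product_ids}


class RequestCollapser:
    """Buffers lookups and issues one batched call once max_batch
    requests have accumulated."""

    def __init__(self, max_batch=3):
        self.max_batch = max_batch
        self.pending = []

    def request(self, product_id):
        self.pending.append(product_id)
        if len(self.pending) >= self.max_batch:
            return self.flush()
        return None  # still waiting for the batch to fill

    def flush(self):
        results = fetch_prices_batch(self.pending)  # single network call
        self.pending = []
        return results
```

The `max_batch` threshold (and, in real collapsers, the flush timer) is precisely the knob for trading response latency against network load.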
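For request caching, Python’s standard library already provides the memoization machinery; the call counter below only exists to make the cache hit visible, and `get_exchange_rate` is again a stand-in for a real network call.

```python
import functools

calls = {"count": 0}  # counts how many real "network calls" were made


@functools.lru_cache(maxsize=256)
def get_exchange_rate(currency):
    calls["count"] += 1  # executed only on a cache miss
    return {"USD": 1.0, "EUR": 0.9}[currency]  # pretend microservice call


get_exchange_rate("EUR")
get_exchange_rate("EUR")  # served from the cache, no second network call
```

In a real system you would also bound the cache’s lifetime (e.g., a TTL), since `lru_cache` alone never expires entries, which is how the staleness problem from the cons above creeps in.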

Let us graphically summarize the pros and cons of each approach mentioned above, assuming that a default static response is used if a microservice becomes unavailable.

As you can see, the main purpose of the suggested approaches is to make the environment as stable as possible, so that the end user does not notice outages of a particular microservice or increased network delays and still receives responses quickly. Put differently: they ensure that a faulty system fails fast and relatively painlessly. This advantage does not come for free; the main sacrifice is usually data accuracy. So when building your system with microservices, ask yourself what matters more in your particular situation: always receiving up-to-date data, or getting feedback from the software quickly and without those “Please come back later” annoyances. In many practical cases, the latter is more justifiable. As a quick example: suppose you rely on a microservice to retrieve prices for a non-binding comparison by a user of your website. If this microservice becomes unavailable, it can ruin the user experience. In this situation, you may be better off ignoring the very latest price updates and smoothly presenting a cached response to the user, at the small risk that some of the shown prices are slightly outdated.

All the approaches outlined above can either be implemented by the development team from scratch or taken from third-party libraries, like Hystrix, the Akka toolkit, or Apache Commons.

Need more advice on microservices development or on refactoring your existing monolithic application? We are always up for a chat. Contact Edvantis to schedule a call.