Scaling Microservices: The Challenges and Solutions
Microservices are sought after for their versatility, but scalability presents a challenge. Let's discuss how to scale microservices while maintaining performance.
The microservices model, by which software solutions are composed of multiple modular components, is a powerful and increasingly popular one. Not only does it allow solutions to be built from the best individual components available, it also allows the most effective services to be redeployed in different combinations. In this way, microservices represent a much more versatile approach to many common computing challenges.
But while microservices can be combined in an endless array of ways to tackle a variety of problems, the concept and its architecture present a series of unique challenges when it comes to scalability. That's not to say that microservices are not scalable; they are generally considered more scalable than the monolithic legacy applications that preceded them. They do, however, require a very different approach to scaling.
The Scalability Problem
At the heart of these challenges lies something of a paradox. The versatility of microservices is one of their strongest characteristics, but that versatility comes at a price.
Aside from their individual, modular nature, microservices are particularly versatile because the individual components can run across a variety of platforms. In fact, it is quite common for a microservice-based solution to consist of several different virtualization platforms and servers working together. This represents a drastically different architecture from legacy IT solutions, and as such, it demands that we think about scalability differently.
When you are running a single monolithic application from a single server, you can handle an increase in demand by allocating more resources to the application. Alternatively, if the increase in demand is the result of more individual users making use of an application, you can begin running new instances of it and spread the load more evenly.
With microservices, on the other hand, upscaling can involve handling a number of different components and services. This means that either all the components need to upscale at the same time, or you need a means of identifying which individual components to upscale, and a method of ensuring that they can still integrate with the rest of the system.
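As a rough sketch of what that means in practice, the Go snippet below scales each service independently based on its own utilization. The service names, metrics, and thresholds are invented purely for illustration; they are not drawn from any particular platform.

```go
package main

import "fmt"

// serviceLoad captures a simplified utilization snapshot for one microservice.
// The fields and values used here are illustrative assumptions.
type serviceLoad struct {
	name       string
	replicas   int
	cpuPercent float64 // average CPU utilization across current replicas
}

// desiredReplicas scales each service on its own metrics: add a replica when
// the service is hot, remove one when it is idle, and leave the rest alone.
func desiredReplicas(s serviceLoad) int {
	switch {
	case s.cpuPercent > 80:
		return s.replicas + 1
	case s.cpuPercent < 20 && s.replicas > 1:
		return s.replicas - 1
	default:
		return s.replicas
	}
}

func main() {
	fleet := []serviceLoad{
		{name: "checkout", replicas: 3, cpuPercent: 91},
		{name: "catalog", replicas: 5, cpuPercent: 12},
		{name: "auth", replicas: 2, cpuPercent: 55},
	}
	for _, s := range fleet {
		fmt.Printf("%s: %d -> %d replicas\n", s.name, s.replicas, desiredReplicas(s))
	}
}
```

A real autoscaler would draw these numbers from a metrics pipeline, but the point stands: scaling decisions are made per component, not for the system as a whole.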
Measuring Performance
Of course, as with all services, regardless of what is going on in the backend of your system, it is the end-user experience that matters the most. If the customer experiences a decrease in speed or reliability, then whatever solution you have implemented has, in their eyes, failed in its role. You don’t have to work in the industry for very long to understand that the end-user cares little for your displays of technical finesse. All that matters to them is their overall experience, as determined by the performance of the service they are using.
Considering the problem of scalability from this perspective encourages you to look beyond the back-end, server-side situation and to focus your attention on the experience of the end-user. A common mistake when upscaling microservices is to consider only the back end, implementing solutions because they should work rather than because they have been shown to work in a real-world situation. An invaluable tool for ensuring the best possible end-user experience is an application delivery controller (ADC), which allows you to detect potential performance issues. ADCs are inherently useful because they are Layer 7 load balancers with extra features that facilitate the automation of scaling. From the perspective of both the service operator and the end-user, an automated solution is the best approach to take.
However, you can’t just use any old ADC. Traditionally, ADCs are designed to handle an application that runs on a single platform in a single location. This type of ADC won’t be much help with a microservice framework that is spread across multiple platforms and systems. Fortunately, there are more modern ADC systems suited to microservices, known as ‘microservice-aware’ ADCs. This type of ADC allows you to monitor your service and optimize its performance, regardless of where it is located or which platform it’s running on.
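To make the Layer 7 part concrete, here is a minimal sketch in Go of the behavior an ADC builds on: routing requests by path to different backend services and recording per-request latency. The backend addresses and routes are placeholders; a production microservice-aware ADC layers health checks, service discovery, and scaling automation on top of this.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

// proxyFor builds a reverse proxy for one backend service.
// The backend addresses used in main are placeholders for illustration.
func proxyFor(rawURL string) *httputil.ReverseProxy {
	target, err := url.Parse(rawURL)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(target)
}

// withLatency wraps a handler and logs how long each request took,
// a stand-in for the end-user-facing metrics an ADC would collect.
func withLatency(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		log.Printf("%s %s took %v", r.Method, r.URL.Path, time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	// Route by request path (Layer 7) to different microservice backends.
	mux.Handle("/orders/", proxyFor("http://orders.internal:8080"))
	mux.Handle("/users/", proxyFor("http://users.internal:8080"))

	log.Fatal(http.ListenAndServe(":8000", withLatency(mux)))
}
```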
Maintaining Performance
With an effective monitoring system in place and an understanding of what your goals should be with regard to performance and efficiency, you can then begin to formulate a strategy for maintaining optimal performance. While monitoring your performance will give you valuable and critical insight into how well your app or service is operating, you also need to know how to interpret those results.
A number of factors affect the performance of microservices when upscaling. Scalability and performance are two different metrics, yet they are intimately entwined with one another. Two key aspects of a microservice’s design determine its performance when scaling: concurrency and partitioning.
Concurrency refers to how each individual task is divided into smaller pieces. Partitioning, meanwhile, determines how efficiently those smaller pieces can be processed in parallel to one another. Scalability is determined by how efficiently tasks are divided and broken down, while performance is a measure of how efficiently the system processes those tasks.
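To make that distinction concrete, here is a small Go sketch with an invented workload: a task is partitioned into fixed-size chunks, and the chunks are processed concurrently. How finely the work is partitioned governs scalability, while how quickly each chunk is processed governs performance.

```go
package main

import (
	"fmt"
	"sync"
)

// processChunk stands in for whatever work a single microservice instance
// would do on its slice of the task; here it simply sums the numbers.
func processChunk(chunk []int) int {
	total := 0
	for _, n := range chunk {
		total += n
	}
	return total
}

func main() {
	// The full task: an invented workload of the numbers 1..1000.
	work := make([]int, 1000)
	for i := range work {
		work[i] = i + 1
	}

	// Partitioning: split the task into fixed-size pieces.
	const chunkSize = 100
	results := make(chan int)
	var wg sync.WaitGroup

	for start := 0; start < len(work); start += chunkSize {
		end := start + chunkSize
		if end > len(work) {
			end = len(work)
		}
		wg.Add(1)
		// Concurrency: each piece is processed alongside the others.
		go func(chunk []int) {
			defer wg.Done()
			results <- processChunk(chunk)
		}(work[start:end])
	}

	go func() {
		wg.Wait()
		close(results)
	}()

	grandTotal := 0
	for r := range results {
		grandTotal += r
	}
	fmt.Println("total:", grandTotal) // 500500
}
```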
A popular and successful microservice system can expect a steady rise in traffic, and therefore resource demands, over time. In order to scale successfully, each individual microservice needs to scale both individually and as part of the larger system. Doing so requires that the dependencies of each microservice also scale with it.
Resource Allocation
In considering the scalability of a microservice system, it is useful to treat hardware properties as resources. This includes things like CPU, data storage, RAM, and similar properties. All of these are finite and represent the physical capabilities of the system. The biggest challenge for the service provider, at least in terms of resource allocation, is prioritizing appropriately.
In order to prioritize a system’s resources appropriately, you need to understand which microservices are the most essential. If your microservice system handles requests from both businesses and individuals, it is generally the businesses that you will want to give priority to. This might mean ensuring that microservices used by your business customers, but not by individual users, are given priority. Alternatively, you could give service priority to users connecting from certain IP addresses.
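One simple way to express that kind of prioritization at the edge is sketched below in Go: requests are classified by a hypothetical account-tier header or by source IP range, and lower-priority traffic is throttled first when capacity is tight. The header name, CIDR block, and slot limit are illustrative assumptions, not part of any real product.

```go
package main

import (
	"log"
	"net"
	"net/http"
)

// businessNetwork is an illustrative CIDR for "business" clients; in practice
// this would come from configuration or an identity service.
var _, businessNetwork, _ = net.ParseCIDR("10.1.0.0/16")

// isPriority classifies a request as high priority if it carries a
// hypothetical tier header or originates from the business network.
func isPriority(r *http.Request) bool {
	if r.Header.Get("X-Account-Tier") == "business" {
		return true
	}
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		return false
	}
	ip := net.ParseIP(host)
	return ip != nil && businessNetwork.Contains(ip)
}

// lowPrioritySlots caps how many non-priority requests are served at once,
// reserving scarce capacity for the services business users depend on.
var lowPrioritySlots = make(chan struct{}, 50)

func prioritize(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !isPriority(r) {
			select {
			case lowPrioritySlots <- struct{}{}:
				defer func() { <-lowPrioritySlots }()
			default:
				http.Error(w, "service busy, please retry", http.StatusTooManyRequests)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8000", prioritize(mux)))
}
```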
Microservices require a very different approach to scaling than monolithic systems. When scaling microservices, you need to consider both the individual components and the system as a whole. With an understanding of the particulars of microservice scaling, however, you stand a very good chance of succeeding.