The Rise of Service Mesh Architecture
In this article, we give an introductory look into the world of service mesh architecture and its utility in microservices.
In this post, I will explore the concept of a service mesh, why it is needed for your cloud native applications, the reason for its popularity, and its incredible growth and adoption within the community.
Microservices have taken the software industry by storm, and rightly so. Transitioning from a monolith to a microservices architecture enables you to deploy your application more frequently, independently, and reliably.
However, not everything is rosy in a microservices architecture; it has to contend with the same problems encountered when designing distributed systems.
On this note, why not recap the Eight Fallacies of Distributed Computing:
- The network is reliable
- Latency is zero
- Bandwidth is infinite
- The network is secure
- Topology does not change
- There is one administrator
- Transport cost is zero
- The network is homogenous
A microservices architecture introduces a dependency on the network, which raises reliability questions. As the number of services increases, you have to manage the interactions between them, monitor overall system health, tolerate faults, have logging and telemetry in place, handle multiple points of failure, and more. Each service needs these common capabilities in place so that service-to-service communication is smooth and reliable. That sounds like a lot of effort when you are dealing with dozens of microservices, does it not?
What Is a Service Mesh?
A service mesh can be defined as an infrastructure layer that handles inter-service communication in a microservices architecture. A service mesh reduces the complexity associated with such an architecture and provides functionality such as:
- Load balancing
- Service discovery
- Health checks
- Authentication
- Traffic management and routing
- Circuit breaking and failover policy
- Security
- Metrics and telemetry
- Fault injection
Why Is Service Mesh Necessary?
In a microservices architecture, handling service-to-service communication is challenging, and most of the time we depend on third-party libraries or components to provide capabilities like service discovery, load balancing, circuit breaking, metrics, and telemetry. Companies like Netflix came up with their own libraries, such as Hystrix for circuit breaking, Eureka for service discovery, and Ribbon for load balancing, which are popular and widely used by organizations.
However, these components need to be configured inside your application code, and the implementation varies depending on the language you are using. Any time these external components are upgraded, you need to update your application, verify it, and deploy the changes. This also means your application code becomes a mixture of business logic and these additional configurations. Needless to say, this tight coupling increases overall application complexity, since developers now also need to understand how these components are configured in order to troubleshoot any issues.
A service mesh comes to the rescue here. It decouples this complexity from your application, moves it into a service proxy, and lets the proxy handle it for you. These service proxies can provide traffic management, circuit breaking, service discovery, authentication, monitoring, security, and much more. From the application's standpoint, all that remains is the implementation of business functionality.
Say you have five services in your microservices architecture talking to each other. Instead of building common functionality like configuration, routing, telemetry, logging, and circuit breaking into every microservice, it makes more sense to abstract it into a separate component: the service proxy.
With the introduction of a service mesh, there is a clear segregation of responsibilities, which makes developers' lives easier. If there is an issue, they can quickly identify whether the root cause is application related or network related.
How Is Service Mesh Implemented?
To implement a service mesh, you can deploy a proxy alongside your services. This is also known as the sidecar pattern.
The sidecars abstract this complexity away from the application and handle functionality such as service discovery, traffic management, load balancing, and circuit breaking.
Envoy, originally built at Lyft, is the most popular open-source proxy designed for cloud native applications. Envoy runs alongside every service and provides the necessary features in a platform-agnostic manner. All traffic to your service flows through the Envoy proxy.
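To make the pattern concrete, here is a minimal sketch of a Kubernetes Pod that runs an application container next to an Envoy sidecar. The image tags, port numbers, and the orders-envoy-config ConfigMap are illustrative assumptions rather than a production setup; in a real mesh the control plane typically generates and injects this for you.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders
  labels:
    app: orders
spec:
  containers:
  - name: orders                        # business-logic container
    image: example/orders:1.0           # hypothetical application image
    ports:
    - containerPort: 8080
  - name: envoy                         # sidecar proxy handling service-to-service traffic
    image: envoyproxy/envoy:v1.28.0     # tag is illustrative
    args: ["-c", "/etc/envoy/envoy.yaml"]   # bootstrap config mounted below
    ports:
    - containerPort: 15001
    volumeMounts:
    - name: envoy-config
      mountPath: /etc/envoy
  volumes:
  - name: envoy-config
    configMap:
      name: orders-envoy-config         # hypothetical ConfigMap holding the Envoy bootstrap
```

In practice you rarely write this by hand; Istio, for example, injects an equivalent istio-proxy container automatically, as shown in the next section.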
What Is Istio?
Istio is an open platform to connect, manage, and secure microservices. It is very popular in the Kubernetes community and is being widely adopted.
Istio provides additional capabilities in your microservices architecture, such as intelligent routing, load balancing, service discovery, policy enforcement, in-depth telemetry, circuit breaking and retries, logging, monitoring, and more.
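As an illustration of how these capabilities are expressed, here is a minimal sketch of Istio's traffic-management resources: a VirtualService that routes traffic and retries failed requests, and a DestinationRule that applies connection limits and outlier detection (Istio's form of circuit breaking). The reviews service, the subset labels, and the thresholds are assumptions made for the example.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews                        # hypothetical service in the mesh
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
    retries:
      attempts: 3                  # retry failed calls up to three times
      perTryTimeout: 2s
      retryOn: "5xx,connect-failure"
    timeout: 10s
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1                  # pods labeled version=v1 form this subset
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    connectionPool:
      tcp:
        maxConnections: 100        # cap concurrent connections to the service
      http:
        http1MaxPendingRequests: 10
    outlierDetection:              # eject misbehaving endpoints (circuit breaking)
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
```

Note that neither resource touches the application code: the rules are applied by the Envoy sidecars that Istio programs on your behalf.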
Istio is one of the best implementations of a service mesh at this point. It enables you to deploy microservices without an in-depth knowledge of the underlying infrastructure.
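That last point is visible in how the sidecar gets attached: you do not modify the application or its deployment manifests. Labeling a Kubernetes namespace is enough for Istio's admission webhook to inject the istio-proxy (Envoy) sidecar into every pod created there. A minimal sketch, with an illustrative namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: shop                       # hypothetical namespace for your microservices
  labels:
    istio-injection: enabled       # Istio injects the istio-proxy sidecar into new pods here
```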
As more and more organizations break down their monoliths into microservices, they will reach a point where managing the growing number of services becomes a burden. A service mesh comes to the rescue in such scenarios, abstracting away these complexities without requiring any changes to the application.