API Gateway vs. Istio Service Mesh
Learn the difference between the traditional API gateway and Istio service mesh. Find out which one you need for cloud-native applications and app modernization.
Architects, DevOps engineers, and cloud engineers are trying to decide whether to continue their journey with the API gateway or adopt an entirely new service mesh technology. In this article, we will examine the difference between the two capabilities and lay out some reasons for software teams to consider, or not consider, a service mesh such as Istio (the most widely used service mesh).
Please note that we are heavily influenced by the concept of MASA architecture, which promotes infrastructure agility, flexibility, and innovation for software development and delivery teams. (MASA — a mesh architecture of apps, APIs, and services — provides technical application professionals with the optimal architecture for delivering applications.)
Also, keep in mind that we may be critical of the API gateway because of our forward-looking stance. But our evaluation aims to be honest and helpful for enterprises looking to introduce radical transformation.
Let us start by analyzing current trends in app modernization.
Trends of App Modernization
- Microservices: Nearly every mid-sized to large company uses microservices to deliver services to end customers. Microservices (as opposed to monoliths) let teams develop small, independently deployable apps and release them to market faster.
- Cloud: This needs no explanation; it is hard to imagine delivering services without being on the cloud. Due to regulations, policies, and data-security requirements, companies also adopt a hybrid cloud: a mix of public/private cloud and on-prem VMs.
- Containers (Kubernetes): Containers are used to deploy microservices, and their adoption is rising. Gartner predicts that 15% of workloads worldwide will run in containers by 2026. And most enterprises use managed containers in the cloud for their workloads.
Adoption of these technologies will only grow in the coming years (Dynatrace predicts container and cloud adoption increasing almost 127% year over year).
The implication of all of these technology trends is that the volume of data transacted over the network has increased. The question is: will the API gateways and load balancers at the center of app modernization be sufficient for the future?
Before we find the answer, let us look at some of the implementations of API gateway.
Sample Scenarios of API Gateway and Load Balancer Implementation
After discussing with many clients, we understood various ways API gateways and load balancers are configured. Let us quickly see a few scenarios and understand the limitations of each of them.
All microservices, cloud, and container journeys start with an API gateway. Take a practical example of two microservices — an author service and an echo server — hosted on an EKS cluster. (In reality, things can be more complicated; we will see that in a later example.) Suppose architects or the DevOps team want to expose the two services in the private cluster to the public (through the DNS name http://abc.com). One way is to put a network load balancer in front of each service (assuming both are hosted in the same cluster but on different nodes). An API gateway, such as AWS API Gateway, can then be configured to route traffic to these load balancers.
If the cluster is in a private VPC, a VPC Link needs to be introduced to carry traffic from the API gateway into the private subnet, where it can be forwarded to the network load balancers. Each load balancer then redirects traffic to its respective service.
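The edge routing described above can be modeled as a simple lookup from public paths to backend load balancers. This is a minimal sketch; the hostnames, paths, and load balancer targets (abc.com, nlb-author.internal, nlb-echo.internal) are all illustrative, not real endpoints.

```python
# Toy model of the edge routing above: an API gateway maps public
# request paths to network load balancers, each fronting one service.
# All names here are illustrative placeholders.

GATEWAY_ROUTES = {
    "/author": "nlb-author.internal:80",   # NLB for the author service
    "/echo":   "nlb-echo.internal:80",     # NLB for the echo server
}

def route(host: str, path: str) -> str:
    """Return the backend target for a public request, or raise."""
    if host != "abc.com":
        raise ValueError(f"unknown host {host!r}")
    for prefix, target in GATEWAY_ROUTES.items():
        if path.startswith(prefix):
            return target
    raise LookupError(f"no route for {path!r}")

print(route("abc.com", "/author/posts"))  # nlb-author.internal:80
```

Note that with one load balancer per service, the routing table — and the load balancer bill — grows with every new microservice.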
The downside of such an architecture is that there can be multiple load balancers, and costs can rise quickly. Thus, some architects use another implementation: a single load balancer with multiple ports opened to serve the various services (in our case, author and echo server). Refer to Fig B:
We have looked at simple use cases, but in practice there can be many microservices hosted across multicloud and container services, which can look something like the diagram below:
Limitations of API Gateways
Traffic Handling Limited to the Edge
An API gateway is good enough for handling traffic at the edge, but if you want to manage the traffic between services (as in Fig D), things can soon become complicated.
Sometimes the same API gateway is used to handle communication between two services, primarily for peer-to-peer connections. However, when two services residing in the same network use external IP addresses to communicate, the traffic takes unnecessary hops: the data travels a longer path and consumes more bandwidth. This pattern is called U-turn NAT or NAT loopback (network hairpinning), and such designs should be avoided.
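The extra cost of hairpinning can be seen with a toy hop count: routing via the external gateway address traverses the NAT twice, while an in-cluster call is direct. The hop names below are illustrative stand-ins, not real network elements.

```python
# Toy hop model of NAT hairpinning: two services in the same subnet
# talking via their external (gateway) addresses traverse the NAT
# twice; an internal address is a single hop. Names are illustrative.

def hops(src: str, dst: str, via_external: bool) -> list:
    if via_external:
        # src -> NAT (egress) -> edge gateway -> NAT (ingress) -> dst
        return [src, "nat-egress", "edge-gateway", "nat-ingress", dst]
    return [src, dst]  # direct in-cluster call

internal = hops("service-a", "service-b", via_external=False)
external = hops("service-a", "service-b", via_external=True)
print(len(internal) - 1, "hop vs", len(external) - 1, "hops")
```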
Inability to Handle East-West Traffic Across Clusters
When services live in different clusters or data centers (public cloud and on-prem VMs), enabling communication between all of them using an API gateway is tricky. A workaround is to federate multiple API gateways, but the implementation is complicated, and the project’s cost can outweigh the benefits.
Lack of Visibility into Internal Communication
Let us consider an example where the API gateway passes a request to Service A. Service A needs to talk to Services B and C to respond. If there is an error in Service B, Service A cannot respond successfully — and with only an API gateway, it is difficult to detect which service is at fault.
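The visibility gap can be sketched with a toy fan-out: the gateway sees only Service A's final status, while per-hop records (standing in for the telemetry a mesh sidecar would emit) pinpoint the failing call. The services and status codes are, of course, hypothetical.

```python
# Sketch of the fan-out failure above: the gateway observes only
# service A's response, not which downstream call actually failed.
# The `trace` list stands in for per-hop mesh telemetry.

trace = []  # (caller, callee, status) records

def call(caller, callee, handler):
    status = handler()
    trace.append((caller, callee, status))
    return status

def service_b():
    return 500          # the real fault is here

def service_c():
    return 200

def service_a():
    b = call("A", "B", service_b)
    c = call("A", "C", service_c)
    return 200 if b == 200 and c == 200 else 500

gateway_view = call("gateway", "A", service_a)
print("gateway sees:", gateway_view)   # only A's 500, cause unknown
print("per-hop trace:", trace)         # A -> B is the failing hop
```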
No Control Over Network Inside the Cluster
When DevOps and cloud teams want to control internal communication or create network policies between microservices, an API gateway cannot be used for such scenarios.
East-West Traffic Is Not Secured
Since API gateway usage is limited to the edge, the traffic inside a cluster (say, an EC2 or EKS cluster) will not be secured. If an attacker compromises one service in a cluster, they can quickly move laterally and take control of the other services. Architects can use workarounds such as issuing certificates and configuring TLS between services, but that is an additional project which can consume a lot of your DevOps team’s time.
Cannot Implement Progressive Delivery
API gateways are not designed to understand application subsets in Kubernetes. For example, suppose Service A has two versions — V1 and V2 — running simultaneously, with the former being the stable release and the latter the canary deployment; an API gateway cannot split traffic between those subsets. The bottom line is that you cannot extend an API gateway to implement progressive delivery strategies such as canary or blue-green deployments.
To overcome these limitations, your DevOps team can build workarounds (such as multiple API gateways implemented in a federated manner), but such a setup is hard to maintain and scale, and would be regarded as technical debt.
Hence, a service mesh infrastructure should be considered. Open-source Istio, developed by Google and IBM, is the most widely used service mesh software. Many advanced organizations, such as Airbnb, Splunk, Amazon.com, and Salesforce, have used Istio to gain agility and security in their networks.
Introducing Istio Service Mesh
Istio is an open-source service mesh software that abstracts the network and security from the fleet of microservices.
From an implementation point of view, Istio injects an Envoy proxy (a sidecar) into each service and handles the L4 and L7 traffic. DevOps and cloud teams can then define network and security policies from a central control plane.
Since Istio can control traffic at both the application and transport layers (L7 and L4), it is easy to manage and secure both north-south and east-west traffic. You can apply fine-grained network and security policies to east-west and north-south traffic in multicloud and hybrid cloud applications. The best part is that Istio provides central observability of network performance in a single pane.
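The sidecar idea itself can be illustrated with a small sketch: every call into a service passes through a co-located proxy that enforces an authorization policy and records metrics, so the application code stays network-unaware. This is a conceptual model only — the class, policy shape, and status codes are made up for illustration, not Istio's API.

```python
# Conceptual sketch of the sidecar pattern: a per-service proxy
# enforces policy and records metrics before the app sees a request.

class Sidecar:
    def __init__(self, service_name, handler, allowed_peers):
        self.name = service_name
        self.handler = handler            # the wrapped application
        self.allowed = allowed_peers      # stand-in for an authz policy
        self.metrics = {"requests": 0, "denied": 0}

    def receive(self, peer, request):
        self.metrics["requests"] += 1
        if peer not in self.allowed:      # policy check at the proxy
            self.metrics["denied"] += 1
            return 403
        return self.handler(request)      # app logic stays unaware

author = Sidecar("author", lambda req: 200, allowed_peers={"echo-server"})
print(author.receive("echo-server", "GET /posts"))  # allowed -> 200
print(author.receive("intruder", "GET /posts"))     # denied  -> 403
print(author.metrics)
```

Because the policy lives in the proxy rather than the application, it can be changed centrally without redeploying the service — which is the point of the control plane.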
The DevOps team can use Istio to implement canary or blue-green deployment strategies alongside CI/CD tools.
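Conceptually, canary routing in a mesh is weighted selection between subsets: most requests go to the stable version, a small share to the canary. The sketch below models a hypothetical 90/10 split in plain Python; the subset names and weights are illustrative, not a real Istio resource.

```python
import random

# Minimal sketch of weighted (canary) routing as a mesh performs it:
# 90% of requests routed to subset v1, 10% to the canary v2.

SUBSETS = ["v1", "v2"]
WEIGHTS = [90, 10]

def route_request(rng):
    """Pick a destination subset according to the configured weights."""
    return rng.choices(SUBSETS, weights=WEIGHTS, k=1)[0]

rng = random.Random(42)          # seeded for reproducibility
counts = {"v1": 0, "v2": 0}
for _ in range(10_000):
    counts[route_request(rng)] += 1
print(counts)                    # roughly a 9000/1000 split
```

In a real rollout, the weights would be shifted gradually (e.g., 90/10 → 50/50 → 0/100) as the canary proves healthy, typically driven by the CI/CD pipeline.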
Read more about the features here: Istio service mesh.
Tabular Comparison of API Gateway and Istio Service Mesh
The table below compares the API gateway and Istio service mesh across a few dimensions, such as network management, security management, observability, and extensibility.
Conclusion
The API gateway has served well for load balancing and other network management at the edge. But as you adopt microservices, cloud, and container technologies to attain scale, architects need to re-imagine network agility and security. Istio service mesh is compelling because it is open source and heavily contributed to by Google, IBM, and Red Hat. And there are various architectural scenarios for integrating an API gateway with Istio for app modernization.
Published at DZone with permission of Debasree Panda. See the original article here.