What Is Envoy Proxy?
Envoy proxy is critical for managing and securing cloud-native and Kubernetes apps. Learn why Envoy proxy is required, along with its architecture, features, and benefits.
The article will cover the following topics:
- Why is Envoy proxy required?
- Introducing Envoy proxy
- Envoy proxy architecture with Istio
- Envoy proxy features
- Use cases of Envoy proxy
- Benefits of Envoy proxy
- Demo video - Deploying Envoy in K8s and configuring it as a load balancer
Why Is Envoy Proxy Required?
Organizations moving their applications from a monolithic to a microservices architecture face plenty of challenges. Managing and monitoring the sheer number of distributed services across Kubernetes and the public cloud often exhausts app developers, cloud teams, and SREs. Below are some of the major network-level operational hassles of microservices, which show why Envoy proxy is required.
Lack of Secure Network Connection
Kubernetes is not inherently secure because services are allowed to talk to each other freely. This poses a great threat to the infrastructure: an attacker who gains access to a pod can move laterally across the network and compromise other services. That makes it harder for security teams to ensure the safety and integrity of sensitive data, and the traditional perimeter-based firewall approach and intrusion detection systems will not help in such cases.
Complying With Security Policies Is a Huge Challenge
No developer enjoys writing security logic to ensure authentication and authorization instead of brainstorming business problems. However, organizations that want to adhere to regulations such as HIPAA or GDPR ask their developers to write security logic such as mTLS encryption into their applications. In enterprises, this leads to two consequences: frustrated developers, and security policies implemented locally and in silos.
Lack of Visibility Due to Complex Network Topology
Typically, microservices are distributed across multiple Kubernetes clusters and cloud providers. Communication between these services within and across cluster boundaries will contribute to a complex network topology in no time. As a result, it becomes hard for Ops teams and SREs to have visibility over the network, which impedes their ability to identify and resolve network issues in a timely manner. This leads to frequent application downtime and compromised SLAs.
Complicated Service Discovery
Services are often created and destroyed in a dynamic microservices environment. Static configurations provided by old-generation proxies are ineffective in keeping track of services in such an environment. This makes it difficult for application engineers to configure communication logic between services because they have to manually update the configuration file whenever a new service is deployed or deleted. It leads to application developers spending more of their time configuring the networking logic rather than coding the business logic.
Inefficient Load Balancing and Traffic Routing
It is crucial for platform architects and cloud engineers to ensure effective traffic routing and load balancing between services. However, manually configuring routing rules and load-balancing policies for each service is a time-consuming and error-prone process, especially for a large fleet of services. Also, traditional load balancers with simple algorithms result in inefficient resource utilization and suboptimal load balancing in the case of microservices. All of this leads to increased latency and service unavailability due to improper traffic routing.
With the rise in the adoption of microservices architecture, there was a need for a fast, intelligent proxy that can handle the complex service-to-service connections across the cloud.
Introducing Envoy Proxy
Envoy is an open-source edge and service proxy, originally developed by Lyft to facilitate its migration from a monolith to a cloud-native microservices architecture. It also serves as a communication bus for microservices (refer to Figure 1 below) across the cloud, enabling them to communicate with each other in a rapid, secure, and efficient manner.
Envoy proxy abstracts network and security concerns from the application layer into an infrastructure layer. This simplifies cloud-native application development, saving developers the hours otherwise spent configuring network and security logic.
Envoy proxy provides advanced load balancing and traffic routing capabilities that are critical to running large, complex distributed applications. Also, the modular architecture of Envoy helps cloud and platform engineers customize and extend its capabilities.
Figure 1: Envoy proxy intercepting traffic between services
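To make the idea concrete, here is a minimal sketch of a static Envoy configuration: a single listener accepts HTTP traffic and forwards everything to one upstream service. The service name `my-service` and the ports are placeholders, not part of any particular deployment.

```yaml
# Minimal static bootstrap sketch: listen on 10000, proxy all HTTP
# traffic to a placeholder upstream called my-service:8080.
static_resources:
  listeners:
  - name: ingress_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: local_service
    type: STRICT_DNS
    load_assignment:
      cluster_name: local_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: my-service, port_value: 8080 }  # placeholder upstream
```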
Envoy Proxy Architecture With Istio
Envoy proxies are deployed as sidecar containers alongside application containers. The sidecar proxy then intercepts and takes care of the service-to-service communication (refer to Figure 2 below) and provides a variety of features. This network of proxies is called the data plane, and it is configured and monitored from a control plane provided by Istio. These two components together form the Istio service mesh architecture, which provides a powerful and flexible infrastructure layer for managing and securing microservices.
Figure 2: Istio sidecar architecture with Envoy proxy data plane
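In practice, with Istio's automatic sidecar injection, opting workloads into the mesh is often just a namespace label; Istio then injects the Envoy sidecar into every pod created in that namespace. A minimal sketch, assuming a hypothetical `payments` namespace:

```yaml
# Labeling a namespace tells Istio to inject the Envoy sidecar into
# every pod created in it. The namespace name is a placeholder.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    istio-injection: enabled
```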
Envoy Proxy Features
Envoy proxy offers the following features at a high level. (Visit Envoy docs for more information on the features listed below.)
- Out-of-process architecture: Envoy proxy runs independently as a separate process apart from the application process. It can be deployed as a sidecar proxy and also as a gateway without requiring any changes to the application. Envoy also works with applications written in any language, such as Java or C++, which provides greater flexibility for application developers.
- L3/L4 and L7 filter architecture: Envoy supports filters and allows customizing traffic at the network layer (L3/L4) and at the application layer (L7). This allows for more control over the network traffic and offers granular traffic management capabilities such as TLS client certificate authentication, buffering, rate limiting, and routing/forwarding.
- HTTP/2 and HTTP/3 support: Envoy supports HTTP/1.1, HTTP/2, and HTTP/3 (currently in alpha) protocols. This enables seamless communication between clients and target servers using different versions of HTTP.
- HTTP L7 routing: Envoy's HTTP L7 routing subsystem can route and redirect requests based on various criteria, such as path, authority, and content type. This feature is useful for building front/edge proxies and service-to-service meshes.
- gRPC support: Envoy supports gRPC, a Google RPC framework that uses HTTP/2 or above as its underlying transport. Envoy can act as a routing and load-balancing substrate for gRPC requests and responses.
- Service discovery and dynamic configuration: Envoy supports service discovery and dynamic configuration through a layered set of APIs that provide dynamic updates about backend hosts, clusters, routing, listening sockets, and cryptographic material. This allows for centralized management and simpler deployment, with options for DNS resolution or static config files.
- Health checking: For building an Envoy mesh, service discovery is treated as an eventually consistent process. Envoy has a health-checking subsystem that can perform active and passive health checks to determine healthy load-balancing targets.
- Advanced load balancing: Envoy's self-contained proxy architecture allows it to implement advanced load-balancing techniques, such as automatic retries, circuit breaking, request shadowing, and outlier detection, in one place, accessible to any application (see the configuration sketch after this list).
- Front/edge proxy support: Using the same software at the edge provides benefits such as observability, management, and identical service discovery and load-balancing algorithms. Envoy's feature set makes it well-suited as an edge proxy for most modern web application use cases, including TLS termination, support for multiple HTTP versions, and HTTP L7 routing.
- Best-in-class observability: Envoy provides robust statistics support for all subsystems and supports distributed tracing via third-party providers, making it easier for SREs and Ops teams to monitor and debug problems occurring at both the network and application levels.
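To give a feel for a few of the features above (L7 path routing, automatic retries, active health checking, circuit breaking, and outlier detection), the fragment below could slot into a static bootstrap like the one shown earlier. All names, thresholds, and intervals are illustrative placeholders, not recommended values.

```yaml
# Route fragment (sits inside the HTTP connection manager):
route_config:
  virtual_hosts:
  - name: api
    domains: ["*"]
    routes:
    - match: { prefix: "/api/v1" }         # L7 path-based routing
      route:
        cluster: api_v1
        retry_policy:                      # automatic retries
          retry_on: "5xx,connect-failure"
          num_retries: 2

# Cluster fragment (sits inside static_resources):
clusters:
- name: api_v1
  type: STRICT_DNS
  lb_policy: LEAST_REQUEST                 # advanced load-balancing policy
  health_checks:                           # active health checking
  - timeout: 1s
    interval: 5s
    unhealthy_threshold: 3
    healthy_threshold: 2
    http_health_check: { path: "/healthz" }
  circuit_breakers:                        # circuit breaking
    thresholds:
    - max_connections: 1024
      max_pending_requests: 256
  outlier_detection:                       # passive health checking / ejection
    consecutive_5xx: 5
    base_ejection_time: 30s
  load_assignment:
    cluster_name: api_v1
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: api-v1.default.svc.cluster.local, port_value: 80 }  # placeholder
```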
Given its powerful set of features, Envoy proxy has become a popular choice for organizations to manage and secure multicloud and multicluster apps. In practice, it has two main use cases.
Use Cases of Envoy Proxy
Envoy proxy can be used as both a sidecar service proxy and a gateway.
Envoy Sidecar Proxy
As we have seen in the Istio architecture, Envoy proxy constitutes the data plane and manages the traffic flow between services deployed in the mesh. The sidecar proxy provides features such as service discovery, load balancing, and traffic routing, and offers visibility and security to the network of microservices.
Envoy as API Gateway
Envoy proxy can be deployed as an API gateway and as an ingress (please refer to the Envoy Gateway project). Envoy Gateway is deployed at the edge of the cluster to manage external traffic flowing into the cluster and between multicloud applications (north-south traffic). Envoy Gateway spares application developers the toil of configuring Envoy proxy (Istio-native) as an API gateway and ingress controller themselves, or of purchasing a third-party solution such as NGINX. It gives them a central location to configure and manage ingress and egress traffic and to apply security policies such as authentication and access control.
Below is a diagram of Envoy Gateway architecture and its components.
Envoy Gateway architecture
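As a rough sketch, Envoy Gateway is typically driven through the Kubernetes Gateway API: a GatewayClass tied to the Envoy Gateway controller, a Gateway that opens a listener, and an HTTPRoute that maps paths to backend Services. The resource names, the controllerName string, and the backend Service below are assumptions based on the Envoy Gateway docs and may vary by version.

```yaml
# GatewayClass managed by Envoy Gateway (controllerName per the project docs).
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
# Gateway that opens an HTTP listener at the cluster edge.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# HTTPRoute mapping a path prefix to a placeholder backend Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend-route
spec:
  parentRefs:
  - name: eg
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: backend          # placeholder Service name
      port: 8080
```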
Benefits of Envoy Proxy
Envoy’s ability to abstract network and security layers offers several benefits for IT teams such as developers, SREs, cloud engineers, and platform teams. Following are a few of them.
Effective Network Abstraction
The out-of-process architecture of Envoy helps it to abstract the network layer from the application to its own infrastructure layer. This allows for faster deployment for application developers, while also providing a central plane to manage communication between services.
Fine-Grained Traffic Management
With its support for the network (L3/L4) and application (L7) layers, Envoy provides flexible and granular traffic routing, such as traffic splitting, retry policies, and load balancing.
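For example, when Envoy runs as the Istio data plane, a traffic split and retry policy can be expressed declaratively and pushed to the sidecars. The sketch below assumes a hypothetical `reviews` service with `v1`/`v2` subsets defined in a matching DestinationRule.

```yaml
# 90/10 traffic split between two versions of a placeholder service,
# with per-route retries, enforced by the Envoy sidecars.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - retries:
      attempts: 3
      perTryTimeout: 2s
    route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```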
Ensure Zero Trust Security at L4/L7 Layers
Envoy proxy helps implement authentication among services inside a cluster with strong identity verification mechanisms such as mTLS and JWT. You can also enforce authorization at the L7 layer with Envoy proxy and ensure zero trust. (You can implement AuthN/Z policies with Istio service mesh, the control plane for Envoy.)
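As a sketch of what this looks like when Istio drives the Envoy sidecars: a PeerAuthentication resource enforces strict mTLS for a namespace, and an L7 AuthorizationPolicy allows only GET requests from a specific service account. The namespace and principal below are hypothetical.

```yaml
# Require mTLS for all workloads in a placeholder namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments
spec:
  mtls:
    mode: STRICT
---
# L7 authorization: only the frontend service account may issue GETs.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-get
  namespace: payments
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/web/sa/frontend"]   # placeholder identity
    to:
    - operation:
        methods: ["GET"]
```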
Control East-West and North-South Traffic for Multicloud Apps
Since enterprises deploy their applications across multiple clouds, it is important to understand and control the traffic flowing in and out of their data centers. Because Envoy proxy can be used as a sidecar and also as an API gateway, it can help manage east-west traffic and north-south traffic, respectively.
Monitor Traffic and Ensure Optimum Platform Performance
Envoy aims to make the network understandable by emitting statistics, which are divided into three categories: downstream statistics for incoming requests, upstream statistics for outgoing requests, and server statistics describing the Envoy server instance. Envoy also emits logs and metrics that give insight into the traffic flow between services, helping SREs and Ops teams quickly detect and resolve performance issues.
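These statistics are exposed through Envoy's admin interface, which is enabled in the bootstrap configuration; the listen address below is a common convention, not a requirement.

```yaml
# Enable the admin interface; stats can then be inspected or scraped,
# e.g. curl http://localhost:9901/stats
admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }
```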
Video: Get Started With Envoy Proxy
Deploying Envoy in K8s and Configuring It as a Load Balancer
The video below discusses different deployment types and their use cases, and it shows a demo of deploying Envoy into Kubernetes and setting it up as a load balancer (edge proxy).
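As a rough sketch of that edge-proxy setup, an Envoy Deployment (assumed to exist with the label `app: envoy` and a ConfigMap carrying the bootstrap config) can be exposed through a Kubernetes Service of type LoadBalancer:

```yaml
# Expose the Envoy edge pods through a cloud load balancer. The selector,
# names, and ports are placeholders for an assumed Envoy Deployment.
apiVersion: v1
kind: Service
metadata:
  name: envoy-edge
spec:
  type: LoadBalancer
  selector:
    app: envoy
  ports:
  - name: http
    port: 80
    targetPort: 10000   # matches the listener port in the Envoy config
```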