Advanced Kubernetes Deployment Strategies
This article reviews concepts in Kubernetes deployment, as well as delves into various advanced Kubernetes deployment strategies, pros and cons, and use cases.
This is an article from DZone's 2021 Kubernetes and the Enterprise Trend Report.
In the modern technology landscape, Kubernetes is a widely adopted platform that enables organizations to deploy and manage applications at scale. The container orchestration platform simplifies infrastructure provisioning for microservice-based applications, which empowers efficient workload management through modularity. Kubernetes supports various deployment resources to help implement CI/CD pipelines using updates and versioning. While Kubernetes offers rolling updates as the default deployment strategy, several use cases require a non-conventional approach to deploying or updating cluster services.
Kubernetes Deployment Concepts
Kubernetes uses deployment resources to update applications declaratively. With deployments, cluster administrators define an application’s lifecycle and how related updates should be performed. Kubernetes deployments offer an automated way to achieve and maintain the desired state for cluster objects and applications. The Kubernetes back end manages the deployment process without manual intervention, offering a safe and repeatable way of performing application updates.
Kubernetes deployments allow cluster administrators to:
- Deploy a pod or replica set
- Update replica sets and pods
- Roll back to earlier versions
- Pause/continue deployments
- Scale deployments
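As a minimal sketch of this declarative model, a Deployment for a hypothetical `nginx` application might look like the following (names and image tags are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: darwin-deployment    # hypothetical name for illustration
spec:
  replicas: 3                # desired number of pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21  # changing this field triggers a rolling update
          ports:
            - containerPort: 80
```

Applying an edited manifest with `kubectl apply -f` hands the new desired state to the deployment controller, which reconciles the cluster toward it without manual intervention.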
The following section explores how Kubernetes simplifies the update process for containerized applications, and how it solves the challenges of continuous delivery.
Kubernetes Objects
While Kubernetes leverages a number of workload resource objects as persistent entities to manage the cluster state, the Kubernetes API uses the Deployment, ReplicaSet, StatefulSet, and DaemonSet resources for declarative updates to an application.
Deployment
Deployment is a Kubernetes resource used to define and identify the application’s desired state. A cluster administrator describes the desired state in the deployment’s YAML file, which is used by the deployment controller to gradually change the actual state to the desired state. To ensure high availability, the deployment controller also constantly monitors and replaces failed cluster nodes and pods with healthy ones.
ReplicaSet
A ReplicaSet is used to maintain a specific number of pods, ensuring high availability. The ReplicaSet's manifest file includes fields for:
- A selector to identify the pods that belong to the set
- The number of replicas, which shows how many pods should be in the set
- A pod template specifying the data (labels, containers, volumes) that pods created to satisfy the ReplicaSet's criteria should have
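These three fields map directly onto a ReplicaSet manifest; a minimal sketch with illustrative names:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: darwin-replicaset   # hypothetical name
spec:
  replicas: 3               # how many pods should be in the set
  selector:                 # identifies the pods that belong to the set
    matchLabels:
      app: nginx
  template:                 # what newly created pods should look like
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
```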
StatefulSet
The StatefulSet object manages the deployment and scaling of pods in a stateful application. This resource manages the pods based on identical container specifications and then ensures appropriate ordering and uniqueness for a set of pods. The StatefulSet’s persistent pod identifiers enable cluster administrators to connect their workloads to persistent storage volumes with guaranteed availability.
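A compact sketch of a StatefulSet showing the stable identity and per-pod storage described above (all names, images, and sizes are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: darwin-db            # hypothetical name
spec:
  serviceName: darwin-db     # headless service providing stable per-pod DNS names
  replicas: 3                # pods are created in order: darwin-db-0, -1, -2
  selector:
    matchLabels:
      app: darwin-db
  template:
    metadata:
      labels:
        app: darwin-db
    spec:
      containers:
        - name: db
          image: postgres:13   # illustrative image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each pod gets its own persistent volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```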
DaemonSet
DaemonSets help to maintain application deployments by ensuring that a group of nodes runs a copy of a pod. A DaemonSet resource is mostly used to manage the deployment and lifecycle of various agents such as:
- Cluster storage agents on every node
- Log collection daemons
- Node monitoring daemons
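As a hedged sketch of the log-collection case (image and names are illustrative), a DaemonSet might look like:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent             # hypothetical log-collection daemon
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: agent
          image: fluentd:v1.14   # illustrative image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log     # read node logs from the host
```

Because a DaemonSet schedules one pod per matching node, every node in the group runs its own copy of the agent.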
More details on the full list of Kubernetes workload resources can be found in the Kubernetes documentation.
Updating With Deployments
Kubernetes Deployments offer a predictable approach to starting and stopping pods. These resources make it easy to deploy, roll back changes, and manage the software release cycle iteratively and autonomously. Kubernetes offers various deployment strategies to enable smaller, more frequent updates as they offer benefits such as:
- Faster customer feedback for better feature optimization
- Reduced time to market
- Improved productivity in DevOps teams
By default, Kubernetes offers rolling updates as the standard deployment strategy, which involves replacing one pod at a time with a new version to avoid cluster downtime. Besides this, depending on the goal and type of features, Kubernetes also supports various advanced deployment strategies — these include blue-green, canary, and A/B deployments.
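The pace of the default rolling update can be tuned directly in the Deployment spec. A minimal sketch (field values are illustrative, not prescriptive):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one pod above the desired count during the update
      maxUnavailable: 0    # never drop below the desired replica count
```

With these settings, Kubernetes replaces pods one at a time without reducing serving capacity.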
Let us take a closer look at what these strategies offer and how they differ from each other.
Advanced Strategies for Kubernetes Deployments
Kubernetes offers multiple ways to release application updates and features depending on the use case and workloads involved. In live production environments, it is crucial to use deployment configurations in conjunction with routing features so that updates impact specific versions. This enables release teams to test for the effectiveness of updated features in live environments before committing full versions. Kubernetes supports advanced deployment strategies so that developers can precisely control the flow of traffic toward particular versions.
Blue-Green Deployment
In the blue-green strategy, both the old and new instances of the application are deployed simultaneously. Users have access to the existing version (blue), while the new version (green) is available to site reliability engineering (SRE) and QA teams with an equal number of instances. Once QA teams have verified that the green version passes all the testing requirements, users are redirected to the new version. This is achieved by updating the `version` label in the `selector` field of the load balancing service.
Blue-green deployment is mostly applicable when developers want to avoid versioning issues.
Using the Blue-Green Deployment Strategy
Let us assume the first version of the application is `v1.0.0`, while the second version available is `v2.0.0`.
Below is the service pointing to the first version:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: darwin-service-a
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: v1.0.0
  ports:
    - name: http
      port: 80
      targetPort: 80
```
And here is the service pointing to the second version:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: darwin-service-b
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: v2.0.0
  ports:
    - name: http
      port: 80
      targetPort: http
```
Once the required tests are performed and the second version is approved, the first service's selector is changed to `v2.0.0`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: darwin-service-a
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: v2.0.0
  ports:
    - name: http
      port: 80
      targetPort: http
```
If the application behaves as expected, `v1.0.0` is discarded.
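The selector switch above can also be expressed as a strategic merge patch; a sketch reusing the example's labels (the patch file name is hypothetical):

```yaml
# patch-green.yaml (hypothetical file name)
spec:
  selector:
    app: nginx
    version: v2.0.0
```

Applied with `kubectl patch service darwin-service-a --patch-file patch-green.yaml`, this redirects the load balancer to the green pods in a single step.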
Canary Deployment
In the canary strategy, a subset of users is routed to the pods that are hosting the new version. This subset is increased progressively while those connected to the old version are reduced. This strategy involves comparing the subsets of users connected to the two versions. If no bugs are detected, the new version is rolled out to the rest of the users.
Using the Canary Deployment Strategy
The process for a native Kubernetes canary deployment involves the following:
1. Deploy the needed number of replicas to run version 1.

   Deploy the first application:

   ```shell
   $ kubectl apply -f darwin-v1.yaml
   ```

   Scale it up to the needed number of replicas:

   ```shell
   $ kubectl scale --replicas=9 deploy darwin-v1
   ```

2. Deploy an instance of version 2:

   ```shell
   $ kubectl apply -f darwin-v2.yaml
   ```

3. Test whether the second version was deployed successfully:

   ```shell
   $ service=$(minikube service darwin --url)
   $ while sleep 0.1; do curl "$service"; done
   ```

4. If the deployment is successful, scale up the number of instances of version 2:

   ```shell
   $ kubectl scale --replicas=10 deploy darwin-v2
   ```

5. Once all replicas are up, delete version 1 gracefully:

   ```shell
   $ kubectl delete deploy darwin-v1
   ```
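The `darwin-v2.yaml` manifest referenced above is not shown in the original; a plausible sketch, assuming the naming used throughout this article, would be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: darwin-v2
spec:
  replicas: 1              # one canary pod alongside nine v1 replicas (~10% of traffic)
  selector:
    matchLabels:
      app: darwin
      version: v2.0.0
  template:
    metadata:
      labels:
        app: darwin        # shared app label so the service selects both versions
        version: v2.0.0
    spec:
      containers:
        - name: darwin
          image: darwin:2.0.0   # hypothetical image
          ports:
            - containerPort: 80
```

Because the `darwin` service selects on the shared `app` label, roughly one request in ten reaches the canary while the nine v1 replicas serve the rest.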
A/B Deployment
With A/B deployments, administrators can route a specific subset of users to a newer version under certain conditions or limitations. These deployments are mostly performed to assess the user base's response to particular features. A/B deployment is also referred to as a "dark launch," since users are not informed that newer features are being tested on them.
Using the A/B Deployment Strategy
Here’s how to perform A/B testing using the Istio service mesh, which helps roll out the versions using traffic weight:
1. Assuming Istio is already installed on the cluster, the first step is to deploy both versions of the application:

   ```shell
   $ kubectl apply -f darwin-v1.yaml -f darwin-v2.yaml
   ```

2. The versions can then be exposed via the Istio Gateway to match requests to the first service:

   ```shell
   $ kubectl apply -f ./gateway.yaml -f ./virtualservice.yaml
   ```

3. The Istio `VirtualService` rule can then be applied based on weight:

   ```shell
   $ kubectl apply -f ./virtualservice-weight.yaml
   ```
This splits the traffic weight among the versions in a 1:10 ratio. To shift the traffic weight, the weight of each service is edited, after which the `VirtualService` rule is updated through the Kubernetes CLI.
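A `virtualservice-weight.yaml` along these lines would produce the weighted split described above (host, subset, and service names are assumptions; Istio requires the weights to sum to 100, so the ratio here is approximate):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: darwin
spec:
  hosts:
    - darwin
  http:
    - route:
        - destination:
            host: darwin
            subset: v1    # assumes a DestinationRule defining subsets v1 and v2
          weight: 90
        - destination:
            host: darwin
            subset: v2
          weight: 10
```

Shifting traffic is then a matter of editing the two `weight` fields and re-applying the manifest.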
When to Use Each Advanced Deployment Strategy
Since Kubernetes use cases vary based on availability requirements, budgetary constraints, available resources, and other considerations, there is no one-size-fits-all deployment strategy. Here are quick takeaways to consider when it comes to choosing the right deployment strategy:
**Comparing Kubernetes Deployment Strategies**

| Strategy | Takeaway | Key Pros | Key Cons |
| --- | --- | --- | --- |
| Blue-green | Two identical environments run side by side; traffic is switched all at once | Instant switchover and rollback; avoids versioning issues | Requires roughly double the resources while both versions run |
| Canary | The new version is rolled out progressively to a growing subset of users | Bugs affect only a small subset of users; easy comparison between versions | Slower rollout; both versions must be monitored during the transition |
| A/B | A specific subset of users is routed to the new version to evaluate features | Enables live feature testing without informing users (dark launch) | Needs fine-grained traffic control, typically via a service mesh such as Istio |
Summary
Kubernetes objects are among the technology's core building blocks, allowing rapid delivery of application updates and features. With deployment resources, Kubernetes administrators can establish an efficient versioning system to manage releases while ensuring minimal to zero application downtime. Deployments allow administrators to update pods, roll back to earlier versions, or scale up infrastructure to accommodate growing workloads.
The advanced Kubernetes deployment strategies covered here also enable administrators to route traffic and requests toward specific versions, allowing for live testing and error processing. These strategies are used to ensure newer features work as planned before the administrators and developers fully commit the changes. While deployment resources form the foundation of persisting application state, it is always recommended to diligently choose the right deployment strategy, prepare adequate rollback options, and consider the dynamic nature of the ecosystem that relies on multiple loosely coupled services.
Additional Resources:
- Using kubectl to Create a Deployment
- Kubernetes Deployment Use Cases
- States of a Kubernetes Deployment Lifecycle