A Simplified Guide to Deploying Kubernetes Clusters
We will discuss the steps for deploying a Kubernetes cluster, delving into the complexities involved, and offering troubleshooting tips to address common issues.
Kubernetes has become the de facto standard for container orchestration, providing robust tools for deploying, scaling, and managing containerized applications. While it offers a powerful platform for managing applications across multiple nodes, the initial setup can be daunting. Bringing up a multi-node cluster introduces additional layers of configuration that must work together in harmony.
The procedures and options for deploying a Kubernetes multi-node cluster cover networking complexities, resource allocation, security configurations, and operational overheads. Depending on your infrastructure (whether on-premises, cloud, or hybrid) and your case-specific requirements, several common deployment methods exist, each with its own advantages and trade-offs. We’ll explore these to help you choose the best approach for your environment.
Using Managed Kubernetes Services (Easiest)
Managed Kubernetes services handle much of the setup and maintenance work for you, making them ideal if you don't require deep customization or prefer not to manage the cluster manually. These services typically offer benefits like auto-scaling, automated updates, and built-in cloud-native security. However, it’s important to consider potential downsides, like higher costs associated with managed services and the risk of vendor lock-in, which might limit your flexibility in the long term.
Popular managed services include:
- Google Kubernetes Engine (GKE) (Google Cloud)
- Amazon Elastic Kubernetes Service (EKS) (AWS)
- Azure Kubernetes Service (AKS) (Azure)
Use kubectl to manage the cluster. For GKE, EKS, and AKS, you'll download credentials with the respective cloud CLI tools to connect kubectl to your cluster.
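For example, each provider's CLI can fetch credentials and merge them into your local kubeconfig. The cluster names, region, and resource group below are placeholders; substitute your own:
gcloud container clusters get-credentials my-cluster --region us-central1   # GKE
aws eks update-kubeconfig --name my-cluster --region us-east-1              # EKS
az aks get-credentials --resource-group my-rg --name my-cluster             # AKS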
Each service comes with its own ecosystem of cloud-native integrations, making it easier to deploy and scale. GKE is known for integrating with Google’s AI/ML tools, while EKS offers deep integration with the broader AWS ecosystem, including security and monitoring tools like IAM and CloudWatch. Keep this in mind when choosing.
Using kubeadm (Self-Managed Cluster)
If you prefer more control over the infrastructure and want to deploy a self-managed Kubernetes cluster on your own machines, kubeadm is a popular choice. However, it requires a higher level of expertise and commitment to maintenance, as you'll need to handle tasks like network setup, security configuration, and upgrades yourself.
Prerequisites
- Minimum of 2 nodes (1 control plane, 1 worker).
- Linux installed (Ubuntu, CentOS, etc.).
- Docker or another container runtime installed.
- kubeadm, kubelet, and kubectl installed.
Steps
1. Prepare the Machines:
- Install Docker on all machines.
- Install kubeadm, kubelet, and kubectl on all machines.
- Disable swap (Kubernetes doesn't work with swap enabled).
- Set up required networking ports and firewall rules (example commands below).
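On Ubuntu, for example, the swap and firewall steps might look like the following. ufw is assumed here; adjust for firewalld or your distribution, and open any additional ports your CNI plugin requires:
sudo swapoff -a                          # disable swap immediately
sudo sed -i '/ swap / s/^/#/' /etc/fstab # keep swap disabled across reboots
sudo ufw allow 6443/tcp                  # Kubernetes API server
sudo ufw allow 10250/tcp                 # kubelet API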
2. Initialize the Control Plane Node: On the control plane node, run:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Save the output, especially the command that lets worker nodes join the cluster.
3. Set Up kubectl on the Control Plane:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
4. Install a Pod Network Add-On: For example, to install Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
5. Join Worker Nodes: On each worker node, use the join command provided during the kubeadm init step:
sudo kubeadm join <master-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
6. Verify Cluster Setup: On the control plane node:
kubectl get nodes
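All nodes should report a Ready status. The exact names, ages, and versions will vary; the output below is illustrative:
NAME       STATUS   ROLES           AGE   VERSION
master-1   Ready    control-plane   10m   v1.30.0
worker-1   Ready    <none>          2m    v1.30.0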
Using Minikube (Local Development)
For local development, Minikube offers a lightweight option to run Kubernetes on a local machine. Minikube is an excellent choice for testing and development environments where you don't need the full scale of a multi-node production cluster.
On the Control Plane Node
1. Start Minikube on the Control Plane Node:
minikube start --nodes 1 --cpus 4 --memory 8192 --driver=docker
In this command:
- --nodes 1: Specifies the number of nodes to start with (initially 1 control plane node).
- --cpus 4: Allocates 4 CPUs to the Minikube VM.
- --memory 8192: Allocates 8 GB of memory to the Minikube VM.
- --driver=docker: Specifies the driver to use for running Minikube (e.g., Docker).
2. Verify the cluster:
kubectl get nodes
On the Worker Nodes
Add worker nodes to the cluster; you can add as many as you need.
minikube node add --cpus 2 --memory 4096 --worker
Repeat this command as needed to add more worker nodes. For example, to add two more worker nodes:
minikube node add --cpus 2 --memory 4096 --worker
minikube node add --cpus 2 --memory 4096 --worker
Once you've added the nodes, you can verify them using kubectl.
kubectl get nodes
You should see output similar to the following, showing multiple nodes:
NAME           STATUS   ROLES    AGE   VERSION
minikube       Ready    master   5m    v1.21.0
minikube-m02   Ready    <none>   2m    v1.21.0
minikube-m03   Ready    <none>   1m    v1.21.0
Using K3s
K3s is a lightweight Kubernetes distribution designed for resource-constrained environments, such as small servers, IoT devices, or edge computing. Developed by Rancher Labs, K3s simplifies the Kubernetes setup while reducing its resource footprint.
Single-Node Installation
For a single-node setup, the installation process is straightforward:
curl -sfL https://get.k3s.io | sh -
This command downloads and installs K3s, setting up a single-node Kubernetes cluster. After installation, K3s runs as a systemd service and creates a kubeconfig file at /etc/rancher/k3s/k3s.yaml.
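K3s bundles its own copy of kubectl, so you can verify the node immediately:
sudo k3s kubectl get nodes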
Multi-Node Installation
For a multi-node setup, designate one node as the server (master) and the rest as agents (workers).
On the server node, run:
curl -sfL https://get.k3s.io | sh -
Retrieve Node Token
Obtain the token from the server node, which will be used by agent nodes to join the cluster:
cat /var/lib/rancher/k3s/server/node-token
Install K3s Agent
On each agent node, run the following command, replacing <SERVER_IP> with the IP address of the server node and <NODE_TOKEN> with the token retrieved in the previous step:
curl -sfL https://get.k3s.io | K3S_URL=https://<SERVER_IP>:6443 K3S_TOKEN=<NODE_TOKEN> sh -
Accessing the Cluster
To interact with the K3s cluster, copy the kubeconfig file from the server node to your local machine:
scp user@<SERVER_IP>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
Set the KUBECONFIG environment variable:
export KUBECONFIG=~/.kube/config
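Note that the copied k3s.yaml points its server address at 127.0.0.1, so rewrite it to the server node's IP before using it from another machine (GNU sed shown; <SERVER_IP> is a placeholder):
sed -i 's/127.0.0.1/<SERVER_IP>/' ~/.kube/config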
Deploying Applications
With the cluster up and running, you can deploy applications using kubectl. For example, to deploy an Nginx web server:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
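To confirm the deployment is reachable, look up the NodePort that was assigned (it falls in the 30000-32767 range by default) and curl it from any machine that can reach a node; <node-ip> and <node-port> are placeholders:
kubectl get svc nginx
curl http://<node-ip>:<node-port>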
Troubleshooting Issues
During the bring-up phase for each of these methods, various issues can arise, ranging from network misconfigurations to resource limitations. Let's take a systematic approach to troubleshooting the most common ones, ensuring a smooth and successful cluster setup.
Network Problems
Issue
Pods cannot communicate with each other or with external services.
Troubleshooting Steps
Check CNI Plugin: Ensure that the Container Network Interface (CNI) plugin (e.g., Flannel, Calico) is correctly installed and running. Check the status of the CNI plugin pods:
kubectl get pods -n kube-system
Network Policies: Verify that network policies are not inadvertently blocking traffic. Review and adjust network policies as needed.
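A quick way to see whether any policies are in play is to list them across all namespaces; if nothing is returned, network policies are not the cause:
kubectl get networkpolicy --all-namespaces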
Node IP Configuration: Ensure the nodes have correct IP configurations and can reach each other. Use the ping command to test connectivity between nodes.
Node Connectivity Issues
Issue
Worker nodes cannot join the cluster or become unresponsive.
Troubleshooting Steps
Check Node Token: Ensure the correct node token is being used when joining worker nodes to the cluster. For K3s, verify the token on the server node (for kubeadm, run kubeadm token list on the control plane):
cat /var/lib/rancher/k3s/server/node-token
Firewall Rules: Ensure that firewall rules allow traffic on the necessary ports (e.g., 6443 for the API server). Update firewall settings if needed.
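From a worker node, you can quickly test whether the API server port is reachable (netcat is assumed to be installed; <SERVER_IP> is a placeholder):
nc -zv <SERVER_IP> 6443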
Node Logs: Check the logs on the worker nodes for any errors related to joining the cluster:
sudo journalctl -u kubelet
sudo journalctl -u k3s-agent   # for K3s
Resource Constraints
Issue
Pods are not scheduling due to insufficient resources.
Troubleshooting Steps
Resource Requests and Limits: Ensure that pods have appropriate resource requests and limits defined. If not, they may fail to schedule due to resource constraints.
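If a workload is missing requests and limits, you can set them imperatively; the nginx deployment from the earlier example and the specific values here are for illustration only:
kubectl set resources deployment nginx --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi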
Node Resources: Verify that nodes have sufficient CPU and memory resources available. Use the following command to check node resources:
kubectl describe nodes
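If the metrics-server add-on is installed, kubectl top gives a quick live view of per-node consumption:
kubectl top nodes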
Cluster Autoscaler: If using a cluster autoscaler, ensure it is correctly configured to add or remove nodes based on resource demands.
Configuration Errors
Issue
Misconfigurations in manifests or deployment scripts cause failures.
Troubleshooting Steps
Validate Manifests: Use kubectl apply --dry-run to validate YAML manifests before applying them to the cluster:
kubectl apply -f <manifest-file> --dry-run=client
Check Logs: Review the logs of the Kubernetes components for configuration-related errors:
sudo journalctl -u kubelet
sudo journalctl -u k3s   # for K3s
Config Files: Verify that configuration files (e.g., kubeconfig, deployment scripts) are correctly formatted and contain valid values.
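To inspect the kubeconfig that kubectl is actually using (credentials are redacted in the output):
kubectl config view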
Use kubectl exec to access a shell inside a running pod and diagnose issues from within the container:
kubectl exec -it <pod-name> -- /bin/sh
Summary
We’ve explored several methods for deploying Kubernetes clusters, each suited to different needs and environments:
- For cloud-based clusters, managed services like GKE, EKS, and AKS are the easiest.
- For on-prem or self-managed clusters, kubeadm offers flexibility and control.
- For development and testing, Minikube or K3s are excellent lightweight options.
By following these guidelines, you can deploy a Kubernetes cluster suited to your specific infrastructure and start taking advantage of its powerful orchestration capabilities.