Kubernetes Cluster Setup on Ubuntu, Explained
Provides guidelines for setting up a Kubernetes cluster on Ubuntu, understanding the internetworking of pods, and other aspects of running the cluster.
Introduction
The purpose of this article is to provide guidelines for those who are interested in the details of setting up a Kubernetes cluster, understanding the internetworking of pods, and other aspects related to the Kubernetes cluster.
This article provides the details for setting up a Kubernetes cluster on Ubuntu. The main topics are the following:
- Overview of Kubernetes Architecture
- Container Network Interface with Calico
- Detailed Procedures for Setting up Kubernetes Cluster
- Troubleshooting Procedures
For the Kubernetes setup, I use a three-node cluster with one master node and two worker nodes. The following are the specific choices for the Kubernetes cluster:
- Container Runtime: containerd
- Network Policy and CNI: Calico
- Operating System: Ubuntu
Architecture Overview
The following diagram illustrates the components in a Kubernetes cluster. Kubernetes nodes fall into two categories: control plane nodes and worker nodes. All the applications run on the worker nodes as containers inside pods.
A few key points worth noting:
- The API server is the brain of the cluster. Virtually all communication and administration is carried out through this component.
- The communication between worker nodes and the control plane goes through the kubelet and the API server; a quick way to inspect these components is shown below.
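For instance, once the cluster is up, a minimal sanity check of the control plane looks like this:
kubectl cluster-info              # prints the API server and CoreDNS endpoints
kubectl get pods -n kube-system   # control plane components run as pods in this namespace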
For CNI/network policy, I use Calico. There are many CNI providers. The following table is a key summary of open-source CNI providers:
| | Flannel | Calico | Cilium | Weave Net | Canal |
|---|---|---|---|---|---|
| Mode of Deployment | DaemonSet | DaemonSet | DaemonSet | DaemonSet | DaemonSet |
| Encapsulation and Routing | VxLAN | IPinIP, BGP, eBPF | VxLAN, eBPF | VxLAN | VxLAN |
| Support for Network Policies | Yes | Yes | Yes | Yes | Yes |
| Datastore Used | Etcd | Etcd | Etcd | No | Etcd |
| Encryption | Yes | Yes | Yes | Yes | No |
| Ingress Support | No | Yes | Yes | Yes | Yes |
| Enterprise Support | Yes | Yes | No | Yes | No |
The three big cloud providers, Azure, GCP, and AWS, have the following CNIs:
- Microsoft has Azure CNI
- Google GKE uses the kubenet CNI, which sits on top of Calico
- AWS uses the Amazon VPC CNI
Kubernetes Pod Networking With Calico
Setup Details
In this section, I will describe the details for setting up the Kubernetes cluster.
Pre-Setup
Set Up kubectl Autocomplete on the Master Node
Add the following to ~/.bashrc on the master node:
source <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
alias k=kubectl
complete -o default -F __start_kubectl k
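With the alias in place, everyday commands become shorter, for example:
k get nodes    # equivalent to: kubectl get nodes
k get pods -A  # list pods in all namespaces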
Common Commands On All Nodes
Change Hostname
sudo hostnamectl set-hostname "k8smaster.ggl.com"    # on the master node
sudo hostnamectl set-hostname "k8sworker1.ggl.com"   # on worker node 1
sudo hostnamectl set-hostname "k8sworker2.ggl.com"   # on worker node 2
Add Entries to /etc/hosts
172.31.105.189 k8smaster.ggl.com k8smaster
172.31.104.148 k8sworker1.ggl.com k8sworker1
172.31.100.4 k8sworker2.ggl.com k8sworker2
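A quick way to verify that the names resolve correctly from any node:
getent hosts k8smaster.ggl.com   # should return 172.31.105.189
ping -c 2 k8sworker1             # confirm the worker is reachable by short name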
Disable Swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
cat /etc/fstab
Note: The reason for disabling memory swapping (swapoff) is stability and performance. This is a required step in any Kubernetes setup (AKS, GKE, EKS).
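Confirm that swap is fully disabled:
free -h   # the Swap line should report 0B used and 0B total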
Load Kernel Modules
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
lsmod | egrep "overlay|br_net"
Note: Kubernetes uses the overlay kernel module for the container file systems. The Linux kernel br_netfilter module is for forwarding IPv4 traffic and letting iptables see the bridged traffic.
Set Kernel Parameters
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
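Confirm the new kernel parameters are active without rebooting:
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables   # both should print 1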
Enable Docker Repository
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Install and Enable the containerd Runtime
sudo apt update
sudo apt install -y containerd.io
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
cat /etc/containerd/config.toml
sudo systemctl restart containerd.service
sudo systemctl enable containerd
sudo systemctl status containerd
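To double-check that containerd is healthy and configured with the systemd cgroup driver:
sudo ctr version                                 # client and server versions print if the daemon is up
grep SystemdCgroup /etc/containerd/config.toml   # should show: SystemdCgroup = true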
Add Kubernetes Repository
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt update
Install kubeadm, kubelet, and kubectl
sudo apt install -y kubelet kubeadm kubectl
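Optionally, hold the packages so that a routine apt upgrade does not unexpectedly bump the Kubernetes version:
sudo apt-mark hold kubelet kubeadm kubectl   # prevent unintended upgrades
kubectl version --client                     # confirm the installed client version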
After executing the above commands on all the nodes (master and worker nodes), we will need to perform the following:
- Initialize the Kubernetes clusters with kubeadm init command
- Have worker nodes join the cluster
Control Plane Setup Commands
Initialize Kubernetes Cluster
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint=k8smaster.ggl.com
The following is the output from the kubeadm init command:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8smaster.ggl.com:6443 --token 7sx7ky.z54u1q0pexh5vh25 \
--discovery-token-ca-cert-hash sha256:6cce1257cfdbd54c981ad64e2a553711e276afc402a52e3899a4725470902686 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8smaster.ggl.com:6443 --token 7sx7ky.z54u1q0pexh5vh25 \
--discovery-token-ca-cert-hash sha256:6cce1257cfdbd54c981ad64e2a553711e276afc402a52e3899a4725470902686
Note: the join commands for master nodes and worker nodes differ only in the --control-plane option.
Install Calico Container Network Interface (CNI)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
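Calico deploys as a DaemonSet in the kube-system namespace. Before moving on, you can watch the pods until they reach the Running state (the label selector below assumes the stock calico.yaml manifest):
kubectl get pods -n kube-system -l k8s-app=calico-node -w   # Ctrl-C once all pods are Running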
Check Cluster Information
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster.ggl.com Ready control-plane 6m22s v1.26.1
Note: At this moment, we have not joined the worker nodes yet. Thus, only one node is in the Ready state.
Check Kubernetes Listening Ports
$ sudo lsof -iTCP -nP | egrep LISTEN
systemd-r 515 systemd-resolve 14u IPv4 15918 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 935 root 3u IPv4 17128 0t0 TCP *:22 (LISTEN)
sshd 935 root 4u IPv6 17130 0t0 TCP *:22 (LISTEN)
node 1464 root 18u IPv4 20071 0t0 TCP *:31297 (LISTEN)
container 3743 root 13u IPv4 35540 0t0 TCP 127.0.0.1:45019 (LISTEN)
kube-sche 6398 root 7u IPv4 47451 0t0 TCP 127.0.0.1:10259 (LISTEN)
kube-cont 6425 root 7u IPv4 47423 0t0 TCP 127.0.0.1:10257 (LISTEN)
kube-apis 6446 root 7u IPv6 48257 0t0 TCP *:6443 (LISTEN)
etcd 6471 root 7u IPv4 47402 0t0 TCP 172.31.105.189:2380 (LISTEN)
etcd 6471 root 8u IPv4 47406 0t0 TCP 127.0.0.1:2379 (LISTEN)
etcd 6471 root 9u IPv4 47407 0t0 TCP 172.31.105.189:2379 (LISTEN)
etcd 6471 root 14u IPv4 48266 0t0 TCP 127.0.0.1:2381 (LISTEN)
kubelet 6549 root 23u IPv6 47676 0t0 TCP *:10250 (LISTEN)
kubelet 6549 root 26u IPv4 47683 0t0 TCP 127.0.0.1:10248 (LISTEN)
kube-prox 6662 root 14u IPv4 49323 0t0 TCP 127.0.0.1:10249 (LISTEN)
kube-prox 6662 root 15u IPv6 49330 0t0 TCP *:10256 (LISTEN)
calico-no 7377 root 10u IPv4 55531 0t0 TCP 127.0.0.1:9099 (LISTEN)
Open Ports for Pod Communication
sudo ufw status verbose
sudo ufw allow 6443/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 2379:2380/tcp
Print Commands for Worker Node To Join the Cluster
sudo kubeadm token create --print-join-command
Worker Node Commands
Join The Kubernetes Cluster
sudo kubeadm join k8smaster.ggl.com:6443 \
--token ufkijl.ukrhpo372w6eoung \
--discovery-token-ca-cert-hash sha256:e6b04ca3f6f4258b027d22a5de4284d03d543331b81cae93ec4c982ab94c342f
Open Ports On Worker Node
sudo ufw status
sudo ufw allow 10250
sudo ufw allow 30000:32767/tcp
sudo ufw status
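Back on the master node, confirm that both workers have joined and reached the Ready state:
kubectl get nodes -o wide   # all three nodes should report Ready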
Verify The Kubernetes Setup
Create NGINX Deployment
kubectl create deployment nginx-app --image=nginx --replicas=2
kubectl get deployments.apps nginx-app
kubectl expose deployment nginx-app --type=NodePort --port 80
kubectl get svc nginx-app
kubectl describe service nginx-app
$ k get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-app-5d47bf8b9-ps6q2 1/1 Running 2 (169m ago) 23h 192.168.159.4 k8sworker1.ggl.com <none> <none>
nginx-app-5d47bf8b9-xbdzz 1/1 Running 2 (169m ago) 23h 192.168.186.196 k8sworker2.ggl.com <none> <none>
$ kubectl describe service nginx-app
Name: nginx-app
Namespace: default
Labels: app=nginx-app
Annotations: <none>
Selector: app=nginx-app
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.105.76.203
IPs: 10.105.76.203
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32592/TCP
Endpoints: 192.168.159.4:80,192.168.186.196:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Note: the nginx service is exposed on NodePort 32592.
$ curl http://k8sworker1:32592
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
This means everything is working.
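Once verified, the test deployment and service can be removed (optional cleanup):
kubectl delete service nginx-app
kubectl delete deployment nginx-app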
Troubleshooting Techniques
During setup, in general, everything works pretty smoothly. Sometimes, however, a firewall may block access to the kube-API service. In this case, run the following command:
$ sudo lsof -iTCP | egrep LISTEN
Check which port the kube-API server is listening on. In my case, it looks like the following:
kube-apis 1346 root 7u IPv6 19750 0t0 TCP *:6443 (LISTEN)
Thus, we need to verify that the port is open with the following command from the worker node:
$ telnet k8smaster 6443
Trying 172.31.105.189...
Connected to k8smaster.ggl.com.
Escape character is '^]'.
This means it is working. If it hangs, it means the port is not open.
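If telnet is not available on the worker node, netcat provides an equivalent check:
nc -vz k8smaster 6443   # "succeeded"/"open" means the API server port is reachable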
To troubleshoot network issues, the following Linux tools are very useful (a few examples follow the list):
- lsof
- nmap
- netstat
- telnet
- ping
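For example, assuming this article's host names:
nmap -p 6443 k8smaster            # check whether the API server port is open
sudo netstat -tlnp | egrep 6443   # on the master: confirm the process listening on 6443
ping -c 3 k8sworker1              # basic reachability between nodes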