An Angular PWA From Front-End to Backend: Kubernetes Deployment
This tutorial shows how to deploy an Angular PWA on a Kubernetes cluster with a Helm chart.
This is the fourth part of the series about the AngularPwaMessenger project. It is a chat system with offline capability. The third part showed how to send and receive messages.
This part will be about the deployment of the AngularPwaMessenger on a Kubernetes cluster with a Helm chart. The Kubernetes cluster is provided by Minikube (v1.0.0 or newer), and Ingress is used as the SSL endpoint. The image is available on Docker Hub.
Architecture
The basic architecture is a Kubernetes cluster with an Ingress controller to terminate SSL. Ingress connects via a service to the AngularPwaMessenger pod, which in turn connects via a service to the MongoDB database. The Ingress controller is needed for the SSL connection because the AngularPwaMessenger uses the crypto API, which browsers only make available over SSL connections or on localhost.
Basic Setup
The basic setup is done with a Helm chart. The chart consists of the files Chart.yaml, values.yaml, kubTemplate.yaml, and _helpers.tpl. With these files the basic setup can be run. Let's have a look at them:
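Helm expects the templates in a templates/ directory next to Chart.yaml and values.yaml, so the assumed chart layout looks roughly like this (the templates/ location is the Helm convention, not taken from the article):
Chart.yaml
values.yaml
templates/
  kubTemplate.yaml
  _helpers.tpl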
The Chart.yaml:
apiVersion: v1
appVersion: "1.0"
description: AngularPwaMessenger Config
name: AngularPwaMessenger
version: 0.1.0
The Chart.yaml defines the apiVersion and the metadata of the Helm chart, like the name and version of the chart.
The values.yaml:
webAppName: angularpwamsg
dbName: mongodbserver
webImageName: angular2guy/angularpwamessenger
webImageVersion: latest
dbImageName: mongo
dbImageVersion: 3.6.6
volumeClaimName: mongo-pv-claim
persistentVolumeName: task-pv-volume
webServiceName: angularpwa
dbServiceName: mongodb
The values.yaml provides the variables used in the templates of the Helm chart.
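Any of these values can be overridden at install time without editing the chart. A minimal sketch with Helm 2 syntax (the overridden version and the my-values.yaml file name are just examples):
#!/bin/sh
# Override a single value on the command line (example value)
helm install ./ --name messenger --set dbImageVersion=4.0
# Or collect overrides in a separate file (hypothetical file name)
helm install ./ --name messenger -f my-values.yaml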
The kubTemplate.yaml is long, so it will be shown in sections:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: {{ .Values.persistentVolumeName }}
  labels:
    type: local
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/mongo1
    type: DirectoryOrCreate
This part sets up the persistent volume to store the data in the MongoDB database. The name is retrieved from the values.yaml file.
Line 8 defines the storageClassName as manual to get a local storage volume (supported by Minikube). Lines 9-10 define the accessModes and set them to ReadWriteOnce so only one pod can use it. Lines 11-12 set the storage capacity to 1GB. Lines 13-15 define the hostPath's path as /data/mongo1 because Minikube persists the files in the /data directory on the host computer. The hostPath type is set to DirectoryOrCreate to use the directory or create it if it does not exist.
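Once the chart is installed (shown later), the volume can be checked with kubectl; a quick verification sketch:
# Check that the persistent volume exists and see its status
kubectl get pv task-pv-volume
# Inspect the backing directory inside the Minikube VM
minikube ssh -- ls -la /data/mongo1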
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.volumeClaimName }}
  labels:
    app: mongopv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
This part sets up the persistent volume claim. It retrieves its name from the values.yaml file and claims the volume that was just created.
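Once deployed, the claim should bind to the volume above; a quick check:
# STATUS should be Bound after the claim matches the manual volume
kubectl get pvc mongo-pv-claim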
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.dbName }}
  labels:
    app: {{ .Values.dbName }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.dbName }}
  template:
    metadata:
      labels:
        app: {{ .Values.dbName }}
    spec:
      containers:
      - name: {{ .Values.dbName }}
        image: "{{ .Values.dbImageName }}:{{ .Values.dbImageVersion }}"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: hostvol
          mountPath: /data/db
      volumes:
      - name: hostvol
        persistentVolumeClaim:
          claimName: {{ .Values.volumeClaimName }}
This part sets up the pod for MongoDB. The names and the labels are set with values of the values.yaml file. The number of replicas has to be 1 because running more MongoDB instances would require them to be configured as a replica set in MongoDB. The important part is the spec with the container and the volume.
In line 18 the container gets its name out of the values.yaml file.
In line 19, the Docker image is set with the values for image name and version out of the values.yaml file.
In line 21 the open port is defined to connect to MongoDB.
Lines 22-24 define where the volumeClaim is mounted in the Docker image to store the DB. Lines 25-28 define the name of the volumeClaim and the volumeClaim to be used.
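To verify the pod and the mounted volume after deployment, something like the following can be used (replace the placeholder with the actual pod name):
# List the MongoDB pod via its app label
kubectl get pods -l app=mongodbserver
# Check that the database files land in the mounted /data/db directory
kubectl exec -it <mongodb-pod-name> -- ls /data/db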
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.dbServiceName }}
  labels:
    app: {{ .Values.dbServiceName }}
spec:
  ports:
  - port: 27017
    protocol: TCP
  selector:
    app: {{ .Values.dbName }}
This part sets up the service for MongoDB. The service name and label are set with the values.yaml file. The selector is set to the app label of the MongoDB deployment from the values.yaml file.
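Inside the cluster the service name is resolvable via DNS, so other pods can reach the database at mongodb:27017. A quick check with a throwaway pod:
# Resolve the MongoDB service name from inside the cluster
kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup mongodb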
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.webAppName }}
  labels:
    app: {{ .Values.webAppName }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.webAppName }}
  template:
    metadata:
      labels:
        app: {{ .Values.webAppName }}
    spec:
      containers:
      - name: {{ .Values.webAppName }}
        image: "{{ .Values.webImageName }}:{{ .Values.webImageVersion }}"
        imagePullPolicy: Always
        env:
        - name: MONGODB_HOST
          value: {{ .Values.dbServiceName }}
        ports:
        - containerPort: 8080
This part sets up the pod for the AngularPwaMessenger. The names and labels are again set with the values.yaml file. The application is stateless, so the number of replicas can be changed. Then there is the spec:
Lines 17-19 define the name of the container and the Docker image to run.
Line 20 makes it so that every time the pod is restarted the image is pulled again, to make sure that it is the current image from Docker Hub.
Lines 21-23 set the MONGODB_HOST environment variable to the name of the MongoDB service. The AngularPwaMessenger connects to localhost if the variable is not set, and to the server in the variable if it is set. Lines 24-25 define the open port of the container.
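On the application side, this fallback can be implemented with a property placeholder. A minimal sketch of what the Spring Boot configuration could look like (not necessarily the project's actual file):
# application.yml (sketch): use MONGODB_HOST if set, otherwise localhost
spring:
  data:
    mongodb:
      host: ${MONGODB_HOST:localhost}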
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.webServiceName }}
  labels:
    run: {{ .Values.webServiceName }}
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: {{ .Values.webAppName }}
This part defines the service to connect to the AngularPwaMessenger pod. The name and label are set with the values.yaml file, and the selector is set to the app label of the AngularPwaMessenger pod.
That is the Helm chart. It is basically the separation of the configuration from the configuration values. This is an improvement compared to plain YAML for kubectl because it is now easy to change the values, and Helm deploys the complete chart.
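The rendered manifests can be inspected before anything is deployed; with Helm 2:
# Render the templates locally with the values from values.yaml
helm template ./
# Check the chart for common problems
helm lint ./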
To test the chart, Minikube has to be started with the minikube start command, and Helm needs to be installed with the helm init command.
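As a script, with a check that Tiller (Helm 2's server-side component) is ready before installing charts (the label selector is an assumption about the default Tiller deployment):
#!/bin/sh
minikube start
helm init
# Wait until the tiller pod in kube-system is running
kubectl get pods -n kube-system -l app=helm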
To run the chart, these commands can be used:
#!/bin/sh
helm delete --purge messenger
helm install ./ --name messenger --set serviceType=NodePort
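Whether the release came up can be checked like this:
# Show the release status and the created resources
helm status messenger
kubectl get pods,svc
# Print the NodePort URL of the web service (plain HTTP at this point)
minikube service angularpwa --url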
Now the system is deployed, but it lacks the SSL connection that browsers require for the crypto API. So now let's set up Ingress as an SSL endpoint and use the SSL certificates.
Ingress Setup
1. First, the Minikube IP needs to be checked with the minikube ip command. Put an entry with the Minikube IP and the name of the Minikube instance in the hosts file of your OS. For example: '192.168.99.100 minikube'.
2. Then add Ingress to Minikube: 'minikube addons enable ingress'.
3. Then the Kubernetes secret needs to be created. If your Minikube IP is 192.168.99.100, then the provided ca.crt and ca.key can be used. Otherwise, please have a look at addIngress.sh and csr.conf. Your Minikube IP has to be set in csr.conf (lines 25-26), and then the OpenSSL steps in addIngress.sh need to be run. To create the Kubernetes secret, execute:
kubectl create secret tls minikube-tls --cert=ca.crt --key=ca.key
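A quick check that the secret was created with both entries:
# The secret should contain the tls.crt and tls.key entries
kubectl get secret minikube-tls
kubectl describe secret minikube-tls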
Ingress needs to be configured with the ingress.yaml file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: angularpwa-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - secretName: minikube-tls
    hosts:
    - minikube
  rules:
  - host: minikube
    http:
      paths:
      - backend:
          serviceName: angularpwa
          servicePort: 8080
        path: /?(.*)
The YAML sets up the Ingress resource and sets its name.
Line 6 sets the Ingress class to nginx.
Line 7 turns on the http-to-https redirect.
Line 8 sets the rewrite target because the AngularPwaMessenger does its own rewriting to support multiple languages.
Lines 9-13 set the secret to use for TLS on the DNS name 'minikube' from the hosts file.
Lines 14-21 define the host minikube with the serviceName and servicePort of the Helm chart. Then the path to the AngularPwaMessenger is set as a regex because multiple paths need to be supported.
4. Then the ingress.yaml can be applied with the command kubectl create -f ./ingress.yaml.
The commands can be found in the addIngress.sh file.
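The endpoint can be tested from the command line before opening the browser (assuming the hosts file entry from step 1):
# -k skips verification of the self-signed certificate
curl -k https://minikube/
# Or verify against the generated CA certificate instead
curl --cacert ca.crt https://minikube/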
5. The last step is to open Chrome, add the ca.crt file in the settings under authorities, and trust it for identifying SSL-encrypted websites.
6. Then, finally, the URL 'https://minikube/' can be called and the system works. The certificate is shown as unsafe because it is self-signed, but that is no problem for testing locally. For testing, each user needs their own browser.
Conclusion
This configuration looks long, but Kubernetes with Helm provides a MongoDB instance with persistent data, a setup for a Spring Boot application, and Ingress to support SSL. That needs to be compared to a similar setup on one or more servers (OS of your choice). Compared to that, the configuration is simpler than expected. I hope this setup can help people use Kubernetes with Helm. The combination is very helpful for simplifying and standardizing the deployment of systems with more than one container.
This is the last article on the AngularPwaMessenger. It was fun to use all the technologies in the project and I have to say thank you to the teams that provide them.