Setting Up a CI/CD Pipeline With Spring MVC, Jenkins, and Kubernetes on AWS
These three tools will automate the creation and deployment of a Docker image on AWS.
The purpose of this post is to show you how to set up a CI/CD pipeline using Jenkins and deploy the application to a Kubernetes cluster.
First, a quick intro to continuous integration (CI) and continuous deployment (CD). CI is the process of integrating code changes into a shared code repository; the steps include compiling, validating, unit testing, and integration testing. It’s good practice to commit small, logically complete changes frequently rather than one big change infrequently.
The next step is continuous deployment. The integrated code needs to be deployed to servers in the assigned environment. For example, if you are running a colossal system like Facebook, you wouldn’t log into thousands of servers to deploy your code manually; you need an automated system to do that for you.
Ours is a Spring MVC project that is deployed to a Tomcat server as a WAR file. We will create a Docker image of the Tomcat server with the WAR file in it and push it to a Docker registry. Kubernetes will then pull that image and deploy our application, giving us an endpoint to query.
Setting Up Dockerfile
Docker is a containerization service. If you are not familiar with containers or the difference between containers and virtual machines, it’s worth watching an introductory video before continuing.
Once you’ve created a Docker image, it needs to be stored and updated somewhere. Some options are Docker Hub, which is a hosted registry, and the open-source Docker Registry, which you can run yourself for free. In this example, we will use the Docker Registry.
First, we’ll pull a Tomcat image and customize it. By customizing, I mean that if you want to add any libraries to Tomcat’s lib folder, you need to pull the image, run Tomcat, copy the libraries into the running container, then commit and tag that new image and push it to the Docker registry (the official Tomcat images are available on Docker Hub).
Here’s the process. Say we want tomcat:9-jre11. Run the command below to pull and start the Tomcat image (-d runs the container in the background and -p maps the port):
docker run -d -p 8080:8080 tomcat:9-jre11
Next, copy the required libraries from your local machine into the running Tomcat container using the docker cp command:
docker cp folderName containerId:/usr/local/tomcat
Next, use docker commit to save the changes and create a new tag for the image:
docker commit containerId tomcat:newTomcatCustom
Each time you make further changes, commit them using the same tag you used initially. To go inside the running container and make changes, use docker exec -it containerId bash, then commit again once you’re done.
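Putting those steps together, the whole customization flow looks like this (a sketch; folderName and containerId are placeholders from the steps above):
docker run -d -p 8080:8080 tomcat:9-jre11
docker cp folderName containerId:/usr/local/tomcat
docker commit containerId tomcat:newTomcatCustom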
So now our custom Tomcat image is ready. Let’s set up a Dockerfile that deploys the application’s WAR file into that Tomcat image.
Dockerfile content:
FROM tomcat:newTomcatCustom
COPY /path/to/filename.war /usr/local/tomcat/webapps/filename.war
Build and run this Dockerfile, and your Tomcat server will be up with the WAR file deployed.
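For example, assuming the Dockerfile sits in the current directory and you name the image videobook-tomcat (the name is an arbitrary choice):
docker build -t videobook-tomcat .
docker run -d -p 8080:8080 videobook-tomcat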
Pushing to Docker Registry
To push the Docker image to the Docker registry, you first need a registry to push to.
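If you don’t already have one running, the open-source registry image can be started on the registry server with a single command (a sketch; port 5000 is the conventional choice):
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Once the registry is reachable, tag the image for it and push: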
docker tag imageName server-ip-registry:port/tag
docker push server-ip-registry:port/tag
If the Docker registry is not using HTTPS, you may get an insecure-repository error. To solve this, either switch the registry to HTTPS and add its certificate to Docker, or create a file named daemon.json under /etc/docker and add the following JSON to it. This tells Docker which insecure (non-HTTPS) registries it may push to.
{
"insecure-registries" : [ "registryLink:port" ]
}
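After editing daemon.json, restart the Docker daemon so the setting takes effect (on a systemd-based host):
sudo systemctl restart docker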
Amazon EKS
We’ve created an image and pushed it to a commonly accessible location. Now we’ll set up a Kubernetes cluster on Amazon EKS. In production, a single server running Docker would not be able to take the load of all users, so you need a cluster of servers, each with the Docker image up and running, and Kubernetes handles all of this for you.
If you are not familiar with Kubernetes, take a look at its architecture and its various components, such as replication controllers, pods, services, replica sets, and deployments. AWS has a well-documented guide for setting up an EKS cluster.
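One way to stand up the cluster is with eksctl (a sketch; the cluster name, region, and node count are placeholders, and the AWS console or the CloudFormation templates from the EKS docs work just as well):
eksctl create cluster --name videobook-cluster --region us-east-1 --nodes 3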
After your cluster is up and running, the next step is to create a replication controller and a service file. The replication controller takes care of the number of pods and replicas to be maintained, and the service gives us the address through which to reach our Spring REST APIs. Each pod in the Kubernetes cluster has its own IP address, which is known to the services, and the services provide an abstraction that decouples the frontend from the backend.
Replication Controller and Services
In this section, we are going to create a replication controller and a service file for deployment purposes. A better alternative is to create a Deployment, which has advantages over a replication controller, but once you know the basics you can use either approach.
Our replication controller is defined below as a YAML file, but you can also write it as JSON.
kind: ReplicationController
apiVersion: v1
metadata:
  name: videobook-controller-1
  labels:
    app: videobook-controller-1
spec:
  replicas: 3
  selector:
    app: videobook
    deploy: firstVersion
  template:
    metadata:
      labels:
        app: videobook
        deploy: firstVersion
    spec:
      containers:
      - name: videobook
        image: server-ip:5000/dockerImageName
        imagePullPolicy: Always
        ports:
        - name: http-server
          containerPort: 8080
Labels and Selectors: Labels are used for grouping, while selectors are used to uniquely identify a set of pods. For example, if there are multiple replication controllers, labels can indicate whether a controller belongs to staging or production, so image changes can be rolled out per environment. The rest of the replication controller file is self-explanatory: image is the tag that was pushed to the Docker registry, and the selector is what uniquely identifies the pods for both the controller and the service file (we’ll talk about this later in the post).
Now run the command below. This is the starting point of your cluster; it creates the controller:
kubectl create -f video-controller.yaml
Output: replicationcontroller/videobook-controller-1 created
To check if the pods are running, run this code:
kubectl describe replicationcontroller/videobook-controller-1
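You can also list the individual pods by the label defined in the controller (app=videobook in this example):
kubectl get pods -l app=videobook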
Now let’s deploy the service file:
kind: Service
apiVersion: v1
metadata:
  name: videobook-service
  labels:
    app: videobook-service
spec:
  ports:
  - port: 8080
    targetPort: http-server
  selector:
    app: videobook
  type: LoadBalancer
The only thing to be careful of here is that the app value in the service’s selector matches the app label on the pods defined in the replication controller (in our case, videobook). This is how the service learns about the pods and connects to their IPs.
kubectl create -f video-service.yaml
kubectl get services
This will give you an external link, which will go live after a minute or two. Hit port 8080 to check that Tomcat is responding.
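To see the external hostname and try it out (the ELB hostname and context path are placeholders; the path depends on the name of your WAR file):
kubectl get service videobook-service
curl http://<elb-hostname>:8080/filename/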
Rolling Updates
Your application is now managed by Kubernetes. The next major piece is rolling updates: if your image is updated in the Docker registry, Kubernetes should roll it out to its pods and update the application’s backend, which is achieved by performing rolling updates in Kubernetes.
Kubernetes rolling updates let you deploy changes with zero downtime.
A rolling update must also leave the service endpoint untouched, because that endpoint is what the frontend consumes.
For a rolling update, you pass the new controller file as the configuration, and certain criteria have to be met:
- Specify a different metadata.name value.
- Overwrite at least one common label in its spec.selector field.
- Use the same metadata.namespace.
Currently, we have only two YAML files: the service file and the controller file. For a rolling update, at least one field in the selector has to change; let’s make that field deploy, and let’s also update metadata.name. Since our project doesn’t set metadata.namespace, we can ignore that criterion.
Change metadata.name (call it videobook-controller-2) and the selector.deploy field (call it secondVersion), then save the file.
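If you prefer to script the two edits, a one-liner like this would do it (a sketch using GNU sed; the file name and values follow the example above):
sed -i 's/videobook-controller-1/videobook-controller-2/; s/firstVersion/secondVersion/' video-controller.yaml
Then run the rolling-update command, passing the old metadata.name value: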
kubectl rolling-update videobook-controller-1 -f video-controller.yaml
This pulls the new Docker image and updates each pod without downtime, and the service endpoint stays the same.
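Once the update finishes, the new controller should have replaced the old one; you can confirm it with:
kubectl get rc
kubectl describe rc/videobook-controller-2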
CI/CD Pipeline
After the Kubernetes cluster is up and running, the next step is to manage updates. Each time code is added or changed, it needs to be tested, pushed to the Docker registry, and then pulled by the Kubernetes cluster and deployed. All of this is handled by the Jenkins pipeline.
Setting up a CI/CD pipeline is easy. First, install Jenkins. By default, Jenkins starts on port 8080. If you want to change the Jenkins port, edit the HTTP_PORT value in the Jenkins configuration file (for example, /etc/default/jenkins on Debian/Ubuntu or /etc/sysconfig/jenkins on RHEL-based systems).
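For example, on a Debian/Ubuntu host (the config path is an assumption; adjust it for your distribution; here we switch Jenkins to port 8081):
sudo sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' /etc/default/jenkins
sudo systemctl restart jenkins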
Next, go to the Jenkins URL (localhost:8081 in our case) and create a Jenkins pipeline. You could also create a freestyle project, where you simply write out the commands, but a pipeline lets you define stages and looks more systematic.
Choose the Pipeline option on the home page, then click on the Pipeline tab to write the pipeline script.
The sample pipeline script is:
def getTimeStamp(){
    return sh(script: "date +'%Y%m%d%H%M%S%N' | sed 's/[0-9][0-9][0-9][0-9][0-9][0-9]\$//g'", returnStdout: true)
}
node('master'){
    stage('Init'){
        script{
            env.TIMESTAMP = getTimeStamp()
            env.REGISTRY_LINK = '<IP>:<PORT>/testitkuber'
        }
    }
    stage('projectInstall') {
        // pull the code from Bitbucket and build/test it with Maven
        git credentialsId: '829494b2-fb3e-4374-8514-47c89e52633f', url: 'bitbucket_url'
        dir('path/to/the/spring/project') {
            sh 'mvn test'
            sh 'mvn install'
        }
    }
    stage('dockerBuild') {
        // build the image, tag it for the registry, and push it
        dir('/path/to/the/spring/project/') {
            sh '''
            docker build -t ${TIMESTAMP} .
            docker tag ${TIMESTAMP} ${REGISTRY_LINK}
            docker push ${REGISTRY_LINK}
            '''
        }
    }
    stage('rollingUpdates') {
        // roll the new image out to the Kubernetes cluster
        dir('/path/to/the/kubernetes/files') {
            sh '''
            kubectl rolling-update videobook-controller-1 -f video-controller.yaml
            '''
        }
    }
}
Note the node('master') in the code: this indicates that the build runs on the master node. It’s compulsory to include it, or you’d get an error. node specifies where the steps will run; master is the name assigned to that node.
Next, click on "Pipeline Syntax" and in the sample step you can enter the operation you need and then type the command, and generate pipeline script.
Troubleshooting
- Amazon EKS uses aws-iam-authenticator for authentication, so you may need to move the aws-iam-authenticator binary into a directory on the PATH (such as /bin) if it isn’t already there (see the sketch after this list).
- You may also need to copy the .kube/config file into the Jenkins home directory, /var/lib/jenkins, so the jenkins user can reach the cluster.
- If you are using an insecure Docker registry, you’ll need to add the insecure-registries entry to the daemon.json file on every server spawned by CloudFormation.
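A minimal sketch of the first two steps (the paths are assumptions and depend on how Jenkins and the authenticator were installed):
sudo cp /path/to/aws-iam-authenticator /bin/
sudo mkdir -p /var/lib/jenkins/.kube
sudo cp ~/.kube/config /var/lib/jenkins/.kube/config
sudo chown -R jenkins:jenkins /var/lib/jenkins/.kube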
Conclusion
That’s the complete process for creating the Jenkins pipeline. The pipeline pulls the code from Bitbucket, tests it, installs it, creates a Docker image, pushes it to the Docker registry, and rolls out the update. This drastically decreases code review time, and testability improves thanks to smaller, more specific changes. There is also the option to revert to the previous deployment with the Kubernetes rollback option.
We used Jenkins in this example, but there are other tools such as Atlassian’s Bamboo or Netflix’s Spinnaker which you can explore as well.
Further Reading
Easily Automate Your CI/CD Pipeline With Jenkins, Helm, and Kubernetes