JobRunr + Kubernetes + Terraform
Deploy the JobRunr application to a Kubernetes cluster on the Google Cloud Platform (GCP) using Terraform.
In this new tutorial, we will build further upon our first tutorial — Easily process long-running jobs with JobRunr — and deploy the JobRunr application to a Kubernetes cluster on the Google Cloud Platform (GCP) using Terraform. We then scale it up to 10 instances to achieve a whopping 869% speed increase compared to a single instance!
This tutorial is a beginner's guide to cloud infrastructure management. Feel free to skip to the parts that interest you.
Kubernetes, also known as k8s, is the hot new DevOps tool for deploying highly available applications. Today, a lot of providers support Kubernetes, including the well-known Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Amazon Elastic Kubernetes Service (EKS).
Although the world is currently in difficult times because of COVID-19, Acme Corp (see the first tutorial) hired so many people that about 10,000 employees now work for them. Acme Corp's CEO insists that all employees get their weekly salary slip before Sunday 11 pm, but this has now become impossible: generating that many salary slips simply takes too long.
Luckily, JobRunr is here to help as it is a distributed background job processing framework. In this tutorial, we will:
- Create a Docker image from our `SalarySlipMicroservice` JobRunr application using Jib by Google
- Upload the Docker image to a private Docker registry at Google
- Use Terraform to define our infrastructure as code, including a Google Cloud SQL instance
- Deploy a Kubernetes cluster to Google Cloud using Terraform
- Deploy one instance of the `SalarySlipMicroservice` JobRunr Docker image to the Kubernetes cluster
- Start generating all the employee salary slips
- Scale to 10 instances of the `SalarySlipMicroservice` JobRunr application

And all of this without any change to our production Java code!
TL;DR: you can find the complete project in our GitHub repository: https://github.com/jobrunr/example-salary-slip/tree/kubernetes
Postgres as Database
In the first version of our application, we used an embedded H2 database. As we now deploy to the Google Cloud Platform (GCP), we will use a Cloud SQL Postgres instance instead. To do so, we need to change the `DataSource` in the `SalarySlipMicroService` as follows:
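The exact `DataSource` code from the project is not included in this extract; a minimal sketch, assuming the standard PostgreSQL JDBC driver on the classpath and connection details supplied as environment variables (the variable names and class name below are assumptions, not from the original repository), could look like this:

```java
import javax.sql.DataSource;

import org.postgresql.ds.PGSimpleDataSource;

// Builds a DataSource for the Cloud SQL Postgres instance.
// DB_HOST, DB_NAME, DB_USER and DB_PASSWORD are assumed to be
// injected as environment variables by the Kubernetes deployment.
public class DataSourceFactory {

    public static DataSource createDataSource() {
        PGSimpleDataSource dataSource = new PGSimpleDataSource();
        dataSource.setURL(String.format("jdbc:postgresql://%s/%s",
                System.getenv("DB_HOST"), System.getenv("DB_NAME")));
        dataSource.setUser(System.getenv("DB_USER"));
        dataSource.setPassword(System.getenv("DB_PASSWORD"));
        return dataSource;
    }
}
```

Reading the connection details from environment variables is what lets us later wire the same Docker image to the Cloud SQL instance from Terraform, without rebuilding the image.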
Dockerize It!
Since Kubernetes runs Pods — which are in fact one or more Docker containers — we first need to create a Docker image from our application. Jib is a tool from Google to easily create Docker images from your Java application using only Maven or Gradle.
In our `build.gradle` file, we add the following plugin:
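The plugin block itself is not shown in this extract; with the Jib Gradle plugin it would look roughly like this (the plugin version and image name are assumptions — check the Jib documentation for the current version):

```groovy
plugins {
    id 'com.google.cloud.tools.jib' version '2.4.0'
}

jib {
    to {
        // the image name under which we later push to Google's private registry
        image = 'gcr.io/jobrunr-tutorial/salary-slip-microservice'
    }
}
```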
If we now run the Gradle command `./gradlew jibDockerBuild`, it will create a new Docker image for us, ready to run on Docker!
Install the Necessary Tools
We now need to install all the necessary tools and create a Google Cloud account:
- Google Cloud SDK: Google Cloud SDK is a set of tools that you can use to manage resources and applications hosted on Google Cloud Platform.
- Kubectl: Kubectl is a command line tool for controlling Kubernetes clusters.
- Terraform: Terraform is an open-source infrastructure as code software tool created by HashiCorp. It enables users to define and provision a data center infrastructure using a high-level configuration language known as Hashicorp Configuration Language.
The installation of these tools is well documented and differs per OS. Follow the installation guide for each of them and come back to the tutorial once you have done so.
We also need an account for Google Cloud. Using your browser, navigate to https://console.cloud.google.com/ — when you first log in to the Google Cloud Platform, you get €300 of free credit, more than enough for us. You can activate it at the top right.
Create the GCP Project
In this tutorial, we will use the terminal as much as possible — so fire up a terminal and log in to gcloud using the command `gcloud auth login`. This allows you to log in only once for all future `gcloud` commands.
To deploy a Kubernetes cluster to GCP, we first need to create a new GCP project, add a billing account to it, enable the container APIs, and upload our Docker image:
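The command sequence is missing from this extract; a plausible version, assuming a project id of `jobrunr-tutorial` and the image name from the Jib build (both placeholders, not from the original article), is:

```shell
# create a new GCP project and make it the default for all gcloud commands
gcloud projects create jobrunr-tutorial
gcloud config set project jobrunr-tutorial

# link a billing account (list the available account ids first)
gcloud beta billing accounts list
gcloud beta billing projects link jobrunr-tutorial --billing-account=<BILLING_ACCOUNT_ID>

# enable the APIs needed for GKE and Cloud SQL
gcloud services enable container.googleapis.com sqladmin.googleapis.com

# push the Docker image built earlier by Jib to the private registry
gcloud auth configure-docker
docker push gcr.io/jobrunr-tutorial/salary-slip-microservice
```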
We also need a Terraform service account with the necessary rights to create the Kubernetes cluster in the GCP project.
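The exact commands are again not included here; creating such a service account typically boils down to something like the following (the account name, the broad `roles/editor` role, and the key file name are assumptions — a production setup would grant narrower roles):

```shell
# create a service account that Terraform will act as
gcloud iam service-accounts create terraform --display-name "Terraform"

# grant it rights on the project (editor is broad; narrower roles are better)
gcloud projects add-iam-policy-binding jobrunr-tutorial \
  --member "serviceAccount:terraform@jobrunr-tutorial.iam.gserviceaccount.com" \
  --role roles/editor

# export a key file for Terraform's google provider to authenticate with
gcloud iam service-accounts keys create account.json \
  --iam-account terraform@jobrunr-tutorial.iam.gserviceaccount.com
```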
Terraform Deep Dive
Now that we're all set up, we can start defining our infrastructure as code using Terraform.
In Terraform, several concepts exist:
- Providers: a provider is responsible for understanding API interactions and exposing resources. Providers are generally IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services.
- Resources: resources are the most important element in the Terraform language. Each resource block describes one or more infrastructure objects, such as virtual networks or compute instances.
- Variables: a variable can have a default value. If you omit the default value, Terraform will ask you to provide it when running a Terraform command.
- Modules: a module is nothing more than a folder that combines related Terraform files.
- Outputs: sometimes a value is needed that is only known after Terraform has applied a change on a cloud provider — think of the IP address assigned to your application. An output takes that value and exposes it so the rest of your configuration can use it.
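As a tiny illustration of these concepts (not part of the project itself — the names below are made up), a variable and an output look like this:

```hcl
# a variable with a default; omit "default" and Terraform prompts for a value
variable "region" {
  type    = string
  default = "europe-west1"
}

# an output exposing a value only known after the apply has run
output "cluster_endpoint" {
  value = google_container_cluster.cluster.endpoint
}
```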
In Terraform, you can organize your code any way you like — Terraform itself figures out how to deploy it. In this tutorial, we will use two modules:
- gke module: this module is responsible for setting up a Kubernetes cluster and a Postgres Cloud SQL instance.
- k8s module: this module will deploy our application to the Kubernetes cluster and expose it to the internet via a service.
Our entry point in Terraform is the `main.tf` configuration file. Next to it are two directories: `gke` and `k8s`. The final directory layout is as follows:
- gke
  - variables.tf
  - gcp.tf
  - cluster.tf
  - cloudsql.tf
- k8s
  - variables.tf
  - k8s.tf
  - deployments.tf
  - services.tf
- main.tf
Entry Point for Terraform - main.tf
`main.tf` is the entry point of our infrastructure as code.
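The file itself is not reproduced in this extract; in essence it wires the two modules together, roughly like this (module inputs and output names are assumptions — see the GitHub repository for the real file):

```hcl
# main.tf - ties the gke and k8s modules together
module "gke" {
  source  = "./gke"
  project = var.project
  region  = var.region
}

module "k8s" {
  source = "./k8s"
  # values produced by the gke module, e.g. cluster credentials and the
  # Cloud SQL connection details, are passed through to the k8s module
  cluster_endpoint = module.gke.cluster_endpoint
  database_ip      = module.gke.database_ip
}
```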
GKE Module
Our GKE module will create a container cluster on Google Cloud and provision a Postgres Cloud SQL instance. We start by defining some variables that can then be used in the other Terraform files.
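The variable definitions are omitted in this extract; a sketch of what `gke/variables.tf` could contain (names and defaults are illustrative, not from the original repository):

```hcl
# gke/variables.tf - inputs for the gke module
variable "project" {
  type = string
}

variable "region" {
  type    = string
  default = "europe-west1"
}

variable "cluster_name" {
  type    = string
  default = "jobrunr-cluster"
}
```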
k8s Module
The k8s module will deploy the Docker image we created earlier and provide it with the environment variables to connect to the Postgres Cloud SQL instance. It will also create a Kubernetes service that exposes the application to the internet via a load balancer.
We again start with the variables that can be used in the other Terraform files of the k8s module.
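As with the gke module, the actual file is not shown in this extract; `k8s/variables.tf` would plausibly declare the Docker image and the database connection details (illustrative names only):

```hcl
# k8s/variables.tf - inputs for the k8s module
variable "docker_image" {
  type    = string
  default = "gcr.io/jobrunr-tutorial/salary-slip-microservice"
}

variable "database_ip" {
  type = string
}

variable "database_password" {
  type      = string
  sensitive = true
}
```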
Deploy Time!
We can now use Terraform commands to provision our application on the Google Cloud Platform. Make sure you are in the directory that contains the `main.tf` file and the `gke` and `k8s` folders when issuing the following commands:
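The commands themselves are missing from this extract; the standard Terraform workflow is:

```shell
# download the providers and initialize the modules
terraform init

# preview the changes Terraform is about to make
terraform plan

# create the cluster and Cloud SQL instance, and deploy the application
terraform apply
```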
After you run the `terraform apply` command, you have to wait... a typical deployment takes about 5 minutes.
After the deployment succeeds, we can query Kubernetes to find out the public IP address.
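The query is a one-liner; assuming the load-balancer service created by the k8s module, the public IP shows up in the EXTERNAL-IP column:

```shell
kubectl get services
```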
Testing Time...
Since the salary slip microservice is now available on the internet, we can test it. First, we will create 10,000 employees in our database. To do so, fire up your favorite browser and go to http://${public-ip-from-the-service}:8080/create-employees?amount=10000. This takes about 15 seconds.
Now, visit the JobRunr dashboard — you can find it at http://${public-ip-from-the-service}:8000/dashboard. Navigate to the Recurring jobs tab and trigger the 'Generate and send salary slip to all employees' job. After about 15 seconds, you should have 10,000 enqueued jobs. Let's measure how long it takes to process them...
Scale it up!
Now, let's scale to 10 instances of our application in the cluster by changing the replicas attribute in the `deployments.tf` file.
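Within the deployment resource this boils down to a single attribute (the resource is abbreviated here and the resource name is an assumption):

```hcl
# k8s/deployments.tf - scale from 1 to 10 replicas
resource "kubernetes_deployment" "salary_slip" {
  # ...
  spec {
    replicas = 10  # was 1
    # ...
  }
}
```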
We now apply this change again using the Terraform apply command: `~/jobrunr/gcloud$ terraform apply`
If you run the command `kubectl get pods`, you will now see 10 pods running our JobRunr application. Let's trigger the 'Generate and send salary slip to all employees' recurring job again and wait for it to finish.
It took only 1,292 seconds, or 21 minutes and 32 seconds!
To keep your free GCP credit, do not forget to issue the command `terraform destroy`. It will stop all pods, remove the Kubernetes cluster, and delete the Postgres Cloud SQL instance.
Conclusion
JobRunr can easily scale horizontally and allows you to distribute all long-running background jobs over multiple instances without any change to the Java code. In an ideal world, we would have seen a 900% speed increase instead of the 869% we see now, as we added 9 extra pods. Since JobRunr guarantees each job is performed only once, there is some coordination overhead when pulling jobs from the queue, which explains the difference.
Learn more
I hope you enjoyed this tutorial and that you can see the benefits of JobRunr, Terraform, and Kubernetes — together they allow you to easily scale horizontally and distribute all long-running background jobs over multiple instances without any change to the Java code.
To learn more, check out these guides:
- JobRunr — Java batch processing made easy...
- Terraform — Provision servers in the cloud with Terraform
- Kubernetes — Getting started with Kubernetes
- Jib — Create fast and easy docker images with Jib
Published at DZone with permission of Ronald Dehuysser. See the original article here.