Microservices With Apache Camel and Quarkus (Part 3)
In Parts 1 and 2, you've seen how to run microservices as Quarkus local processes. Let's now look at some K8s-based deployments, starting with Minikube.
Minikube is probably the simplest and most approachable K8s cluster. As a lightweight K8s distribution designed to run with low resources, an effective Minikube setup requires nothing more than your own laptop. From this perspective, Minikube is a great choice for development environments, giving quick access to infrastructure elements like nodes, pods, deployments, services, and other K8s subtleties that are more difficult to experiment with in a full-scale scenario.
As a K8s-native runtime, Quarkus supports various types of clusters, including, but not limited to, Minikube, Kind, OpenShift, EKS (Amazon Elastic Kubernetes Service), and AKS (Azure Kubernetes Service).
Packaging to Docker Images
Quarkus offers several choices for packaging a cloud-native application, based on different techniques and tools, as follows:
- Jib
- Docker
- S2I (Source-to-Image)
In our prototype, we're using Jib, which, as opposed to the other two methods, has the advantage of not requiring a Docker daemon running on the host machine. In order to take advantage of it, just include the following Maven dependency in the master pom.xml file:
...
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-container-image-jib</artifactId>
</dependency>
...
Run a new build:
mvn -DskipTests -Dquarkus.container-image.build=true clean package
When finished, if there is a Docker daemon running locally, the container image creation may be checked as follows:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
aws-quarkus/aws-camelk-sqs 1.0.0-SNAPSHOT 63072102ba00 9 days ago 382MB
aws-quarkus/aws-camelk-jaxrs 1.0.0-SNAPSHOT 776a0f99c5d6 9 days ago 387MB
aws-quarkus/aws-camelk-s3 1.0.0-SNAPSHOT 003f0a987901 9 days ago 382MB
aws-quarkus/aws-camelk-file 1.0.0-SNAPSHOT 1130a9c3dfcb 9 days ago 382MB
...
Deploying to Minikube
Our Apache Camel microservices don't require any modification or refactoring in order to be deployed to Minikube. However, the build process, which consists of all the steps necessary for testing, packaging, and deploying the application to K8s, has to be adapted so that it becomes cloud-aware and takes advantage of Minikube's peculiarities.
Hence, the first modification is to add the quarkus-minikube Maven artifact to our master pom.xml file, as shown below:
...
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-minikube</artifactId>
</dependency>
...
This artifact will generate Minikube-specific manifest files in the project's target/kubernetes directory. As everyone knows, everything in K8s is described in YAML (YAML Ain't Markup Language) notation. And while K8s historically requires quite heavy YAML authoring and editing, using this artifact has the advantage of automatically generating the required YAML files or, at least, a base skeleton that can be enriched later.
Performing a new build by running the mvn -DskipTests clean install command at the project's root level will produce two categories of files in the target/kubernetes directory for each Quarkus microservice:
- A kubernetes.yaml/json pair of files containing the manifest describing the microservice's general K8s resources
- A minikube.yaml/json pair of files containing the manifest describing the microservice's Minikube-specific resources
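To get a quick feel for what the build produces, you can list the generated files for one of the modules. The listing below is just an illustration; the exact file names may vary slightly across Quarkus versions:
$ ls aws-camelk-jaxrs/target/kubernetes
kubernetes.json  kubernetes.yaml  minikube.json  minikube.yaml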
For example, for the aws-camelk-jaxrs microservice, going to aws-camelk-jaxrs/target/kubernetes and opening the minikube.yaml file, you'll see the following:
...
spec:
  ports:
    - name: http
      nodePort: 30326
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: aws-camel-jaxrs
    app.kubernetes.io/part-of: aws-camelk
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
  type: NodePort
...
This manifest fragment defines a K8s service of the type NodePort that listens for HTTP requests on TCP port 80, forwards them to the container's port 8080, and is exposed on the node's port 30326. This configuration is Minikube-specific: for other clusters like EKS, the type of the configured K8s service would be ClusterIP instead of NodePort. The selector section defines the service name, version, and the package it is part of, customized via the following properties:
...
quarkus.kubernetes.part-of=aws-camelk
quarkus.kubernetes.name=aws-camel-jaxrs
quarkus.kubernetes.version=1.0.0-SNAPSHOT
...
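Once the microservices are deployed to the cluster (see the Running on Minikube section below), the NodePort assignment can be checked and resolved into a directly reachable URL with standard kubectl and Minikube commands. The service name aws-camel-jaxrs and the namespace quarkus-camel below come from the properties and commands used in this article:
$ kubectl get service aws-camel-jaxrs --namespace quarkus-camel
$ minikube service aws-camel-jaxrs --namespace quarkus-camel --url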
Another important point to notice is the AWS credentials definition. Our microservices need access to AWS and, for that purpose, properties like the region name and the access key ID and secret have to be defined. While the region name isn't a piece of sensitive information and may be defined as a clear-text property, this isn't the case for the access key-related properties, which require the use of K8s secrets. The following listing shows a fragment of the application.properties file:
...
quarkus.kubernetes.env.vars.aws_region=eu-west-3
quarkus.kubernetes.env.secrets=aws-secret
...
Here, the region name is defined as eu-west-3 in plain text, while the AWS access key credentials are provided via a K8s secret named aws-secret.
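If you're curious how these properties surface in the generated manifests, you can search the generated kubernetes.yaml for the secret reference. With recent Quarkus versions, the quarkus.kubernetes.env.secrets property is typically rendered as an envFrom/secretRef entry on the container; treat the command below as an illustrative check rather than a guaranteed output:
$ grep -B 1 -A 2 secretRef aws-camelk-jaxrs/target/kubernetes/kubernetes.yaml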
Running on Minikube
We have just reviewed how to refactor our Maven-based build process in order to make it K8s-native. In order to run the microservices on Minikube, proceed as follows:
Start Minikube
Minikube should, of course, be installed on your box. That's a very easy operation; just follow the guide here. Once installed, you need to start Minikube:
$ minikube start
...
$ eval $(minikube -p minikube docker-env)
After starting Minikube, the last command in the listing above points the local Docker client at the Docker daemon running inside Minikube, so that newly generated images are published directly to it.
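You can quickly confirm that the cluster is up and that the Docker client now talks to Minikube's internal daemon rather than to your local one; after eval-ing docker-env, docker info should report the Minikube node name (minikube by default):
$ minikube status
$ docker info --format '{{.Name}}'
minikube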
Clone the Project From GitHub
Run the following commands to clone the repository:
$ git clone https://github.com/nicolasduminil/aws-camelk.git
$ cd aws-camelk
$ git checkout minikube
Create a K8s Namespace and Secret
Run the following commands to create the K8s namespace and secret:
$ kubectl create namespace quarkus-camel
$ kubectl apply -f aws-secret.yaml --namespace quarkus-camel
Here, after creating a K8s namespace named quarkus-camel, we create a K8s secret in this same namespace by applying the config in the manifest file named aws-secret.yaml, as shown below:
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
type: Opaque
data:
  AWS_ACCESS_KEY_ID: ...
  AWS_SECRET_ACCESS_KEY: ...
The values of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are Base64-encoded.
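The encoded values can be produced with the base64 utility or, if you'd rather not keep credentials in a file at all, the same secret can be created imperatively; the AKIA... value below is, of course, a placeholder for your own access key:
$ echo -n 'AKIA...' | base64
$ kubectl create secret generic aws-secret --namespace quarkus-camel \
    --from-literal=AWS_ACCESS_KEY_ID='AKIA...' \
    --from-literal=AWS_SECRET_ACCESS_KEY='...'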
Start the Microservices
The same script (start-ms.sh) that we used in Part 2 to start the microservices may be used again for the same purpose. It has been modified as shown below:
#!/bin/sh
./delete-all-buckets.sh
./create-queue.sh
mvn -DskipTests -Dquarkus.kubernetes.deploy clean package
sleep 3
./copy-xml-file.sh
Here, we start by cleaning up the environment and deleting all the S3 buckets named "mys3*", if any. Then we create an SQS queue named "myQueue" if it doesn't exist already. If it exists, we purge it by removing all the messages stored in it.
The Maven command uses the quarkus.kubernetes.deploy property so that the newly generated Docker images are deployed to Minikube. Last but not least, copying an XML file into the input directory triggers the Camel route named aws-camelk-file, which starts the pipeline.
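Once the script has finished, you can check that the deployments and services actually landed in the quarkus-camel namespace:
$ kubectl get deployments --namespace quarkus-camel
$ kubectl get services --namespace quarkus-camel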
Observe the Log Files
In order to follow the microservices' execution, run the commands below:
$ kubectl get pods --namespace quarkus-camel
$ kubectl logs <pod-id> --namespace quarkus-camel
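If you prefer to stream a microservice's output live, the -f flag follows the log, and addressing the deployment instead of the pod saves you the pod-ID lookup; the deployment name below is the one set via quarkus.kubernetes.name:
$ kubectl logs -f deployment/aws-camel-jaxrs --namespace quarkus-camel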
Stop the Microservices
In order to stop the microservices, run the command below:
./kill-ms.sh
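If you'd rather clean up the K8s side by hand, or double-check that nothing is left behind, deleting everything in the quarkus-camel namespace achieves the same result; this is an alternative sketch, not necessarily what kill-ms.sh does internally:
$ kubectl delete all --all --namespace quarkus-camel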
Clean up the AWS Infrastructure
Don't forget to clean up your AWS infrastructure in order to avoid being billed:
$ ./delete-all-buckets.sh
$ ./purge-sqs-queue.sh
$ ./delete-sqs-queue.sh
Stop Minikube
Last but not least, stop Minikube:
$ eval $(minikube -p minikube docker-env --unset)
$ minikube stop
Enjoy!
Previous Posts in This Series: