Microservices With Apache Camel and Quarkus
This post proposes a microservices deployment model based on Camel, using a Java development stack, Quarkus as a runtime, and K8s as a cloud-native platform.
Apache Camel is anything but a new arrival in the area of the Java enterprise stacks. Created by James Strachan in 2007, it aimed at being the implementation of the famous "EIP book" (Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf, published by Addison-Wesley in October 2003). After having become one of the most popular Java integration frameworks by the early 2010s, Apache Camel was on the point of getting lost in the folds of history in favor of a new architecture model known as the Enterprise Service Bus (ESB), perceived at the time as a panacea of Service Oriented Architecture (SOA).
But after the SOA fiasco, Apache Camel (which, in the meantime, has been adopted and distributed by several vendors, including but not limited to Progress Software and Red Hat, under commercial names like Mediation Router or Fuse) is making a powerful comeback and is still here, even stronger for the next decade of integration. This comeback has also been made easier by Quarkus, the new supersonic, subatomic Java platform.
This article aims at proposing a very convenient microservices implementation approach using Apache Camel as a Java development tool, Quarkus as a runtime, and different Kubernetes (K8s) clusters - from local ones like Minikube to PaaS like EKS (Elastic Kubernetes Service), OpenShift, or Heroku - as the infrastructure.
The Project
The project used here in order to illustrate the point is a simplified money transfer application consisting of four microservices, as follows:
aws-camelk-file: This microservice polls a local folder and, as soon as an XML file arrives, stores it in a newly created AWS S3 bucket whose name starts with mys3, followed by a random suffix.
aws-camelk-s3: This microservice listens on the first AWS S3 bucket found whose name starts with mys3. As soon as an XML file comes in, it splits, tokenizes, and streams it, before sending each resulting message to an AWS SQS (Simple Queue Service) queue named myQueue.
aws-camelk-sqs: This microservice subscribes to the AWS SQS queue named myQueue and, for each incoming message, unmarshals it from XML to Java objects, then marshals it to JSON format, before sending it to the REST service below.
aws-camelk-jaxrs: This microservice exposes a REST API having endpoints for CRUD-ing money transfer orders. It consumes and produces JSON data. It uses a service that exposes an interface defined by the aws-camelk-api module. Several implementations of this interface might be present but, for simplicity's sake, in the current case, we're using the one defined by the aws-camelk-provider module, named DefaultMoneyTransferProvider, which only CRUDs the money transfer order requests in an in-memory hash map.
The project's source code may be found here. It's a multi-module Maven project, whose modules are explained below. The most important Maven dependencies are the following:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>io.quarkus.platform</groupId>
<artifactId>quarkus-bom</artifactId>
<version>${quarkus.platform.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>io.quarkus.platform</groupId>
<artifactId>quarkus-camel-bom</artifactId>
<version>${quarkus.platform.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-bom</artifactId>
<version>1.12.454</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
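With these BOMs imported, the individual modules can declare their Quarkus and Camel dependencies without explicit versions. A hedged sketch (the artifact choices below are illustrative, not necessarily the project's exact dependency list):

```xml
<dependencies>
  <!-- Versions are managed by the imported BOMs above -->
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-jsonb</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel.quarkus</groupId>
    <artifactId>camel-quarkus-aws2-s3</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel.quarkus</groupId>
    <artifactId>camel-quarkus-aws2-sqs</artifactId>
  </dependency>
</dependencies>
```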
The Module aws-camelk-model
This module defines the application's domain, which consists of business objects like MoneyTransfer, Bank, BankAddress, etc. One of the particularities of integration applications is the fact that the business domain is legacy and was generally designed decades ago by business analysts and experts who knew nothing about the tool-set that you, as a software developer, are currently using. This legacy takes various forms, like Excel sheets and CSV or XML files.
Hence, we consider here the classical scenario in which our domain model is defined as an XML grammar, described by a couple of XSD files. These XSD files are in the src/main/resources/xsd directory and are processed by the jaxb2-maven-plugin in order to generate the associated Java classes. The listing below shows the plugin's configuration:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>jaxb2-maven-plugin</artifactId>
<dependencies>
<dependency>
<groupId>org.jvnet.jaxb2_commons</groupId>
<artifactId>jaxb2-value-constructor</artifactId>
<version>3.0</version>
</dependency>
</dependencies>
<executions>
<execution>
<goals>
<goal>xjc</goal>
</goals>
</execution>
</executions>
<configuration>
<packageName>fr.simplex_software.quarkus.camel.integrations.jaxb</packageName>
<sources>
<source>${basedir}/src/main/resources/xsd</source>
</sources>
<arguments>
<argument>-Xvalue-constructor</argument>
</arguments>
<extension>true</extension>
</configuration>
</plugin>
Here, we're running the xjc
schema compiler tool to generate Java classes in the target package fr.simplex_software.quarkus.camel.integrations.jaxb
based on the XSD schema present in the project's src/main/resources/xsd
directory. By default, these automatically generated Java objects having JAXB (Java Architecture for XML Binding) annotations don't have constructors, which makes them a bit hard to use, especially for classes with lots of properties that must be instantiated via setters. Accordingly, in the listing above, we configure the jaxb2-maven-plugin
with a dependency on the jaxb2-value-constructor artifact. In doing that, we ask the xjc compiler to generate full-argument constructors for every JAXB-processed class.
The final result of this module is a JAR file containing our domain model in the form of a Java class hierarchy that will be used as a dependency by all the other application's modules. This approach is much more practical than manually implementing, in Java, the domain objects already defined by the XML grammar.
The Module aws-camelk-api
This module is very simple as it only consists of an interface. This interface, named MoneyTransferFacade
, is the one exposed by the money transfer service. This service has to implement the exposed interface. In practice, such a service might have many different implementations, depending on the nature of the money transfer, the bank, the customer type, and many other possible criteria. In our example, we only consider a simple implementation of this interface, as shown in the next section.
The Module aws-camelk-provider
This module defines the service provider for the MoneyTransferFacade
interface. The SPI (Service Provider Interface) pattern used here is a very powerful one, allowing us to decouple the service interface from its implementation.
Our implementation of the MoneyTransferFacade
interface is the class DefaultMoneyTransferProvider
and it's also very simple, as it only CRUDs the money transfer orders in an in-memory hash map.
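A minimal sketch of such an in-memory provider, assuming String-based keys and payloads (the real DefaultMoneyTransferProvider's types and method names may differ):

```java
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory CRUD provider; illustrative only,
// not the project's actual DefaultMoneyTransferProvider.
public class InMemoryMoneyTransferProvider {
    private final Map<String, String> orders = new ConcurrentHashMap<>();

    public String create(String order) {
        String id = UUID.randomUUID().toString();   // generate a unique order id
        orders.put(id, order);
        return id;
    }

    public Optional<String> read(String id) {
        return Optional.ofNullable(orders.get(id));
    }

    public boolean update(String id, String order) {
        return orders.replace(id, order) != null;   // only updates existing orders
    }

    public boolean delete(String id) {
        return orders.remove(id) != null;
    }
}
```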
The Module aws-camelk-jaxrs
As opposed to the previous modules which are only common class libraries, this module and the next ones are Quarkus runnable services. This means that they use the quarkus-maven-plugin
in order to create an executable JAR.
This module, as its name implies, exposes a JAX-RS (Java API for RESTful Web Services) API to handle money transfer orders. Quarkus comes with RESTEasy, Red Hat's full implementation of the JAX-RS specifications, and this is what we're using here.
There is nothing special to mention about the class MoneyTransferResource, which implements the REST API. It offers endpoints to create, read, update, and delete money transfer orders and, additionally, two endpoints that check the application's liveness and readiness.
The Module aws-camelk-file
This module is the first one in the Camel pipeline, consisting of conveying XML files containing money transfer orders from their initial landing directory to the REST API, which processes them on behalf of the service provider. It uses Camel Java DSL (Domain Specific Language) for doing that, as shown in the listing below:
fromF("file://%s?include=.*.xml&delete=true&idempotent=true&bridgeErrorHandler=true", inBox)
.doTry()
.to("validator:xsd/money-transfers.xsd")
.setHeader(AWS2S3Constants.KEY, header(FileConstants.FILE_NAME))
.to(aws2S3(s3Name + RANDOM).autoCreateBucket(true).useDefaultCredentialsProvider(true))
.doCatch(ValidationException.class)
.log(LoggingLevel.ERROR, failureMsg + " ${exception.message}")
.doFinally()
.end();
This code polls an input directory, defined as an external property, for the presence of any XML file (files having the .xml extension). Once such a file lands in the given directory, it is validated against the schema defined in the src/main/resources/xsd/money-transfers.xsd
file. Should it be valid, it is stored in an AWS S3 bucket whose name is computed as being equal to an externally defined constant followed by a random suffix. Everything is encapsulated in a try...catch
structure to consistently process the exception cases.
Here, in order to define external properties, we use the Eclipse MicroProfile Config specification (among others) implemented by Quarkus, as shown in the listing below:
private static final String RANDOM = new Random().ints('a', 'z')
.limit(5)
.collect(StringBuilder::new, StringBuilder::appendCodePoint, StringBuilder::append)
.toString();
@ConfigProperty(name="inBox")
String inBox;
@ConfigProperty(name="s3Name")
String s3Name;
The RANDOM suffix is generated using the java.util.Random class, and the properties inBox and s3Name are injected from the src/main/resources/application.properties file. The reason for using an S3 bucket name composed of a constant and a random suffix is that AWS S3 bucket names need to be globally unique and, accordingly, we need such a random suffix in order to guarantee uniqueness.
The Module aws-camelk-s3
This module implements a Camel route which is triggered by the AWS infrastructure whenever a file lands in the dedicated S3 bucket. Here is the code:
from(aws2S3(s3BucketName).useDefaultCredentialsProvider(true))
.split().tokenizeXML("moneyTransfer").streaming()
.to(aws2Sqs(queueName).autoCreateQueue(true).useDefaultCredentialsProvider(true));
Once triggered, the Camel route splits the input XML file after having tokenized it, order by order. The idea is that an input file may contain several money transfer orders, and these orders are to be processed separately. Hence, each single money transfer order issued from this tokenizing and splitting process is sent to the AWS SQS queue whose name is given by the value of the queueName
property, injected from the application.properties
file.
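To illustrate what the tokenizing and splitting step produces, here is a hedged, Camel-free Java sketch (a simple regex stands in for Camel's real streaming, namespace-aware tokenizer) that extracts each moneyTransfer element from an aggregate document:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Plain-Java illustration of what tokenizeXML("moneyTransfer") yields:
// each <moneyTransfer> element becomes a separate message body.
// Demonstration only; Camel's tokenizer streams and handles namespaces.
public class MoneyTransferTokenizer {
    private static final Pattern ELEMENT =
        Pattern.compile("<moneyTransfer\\b[^>]*>.*?</moneyTransfer>", Pattern.DOTALL);

    public static List<String> tokenize(String xml) {
        List<String> tokens = new ArrayList<>();
        Matcher m = ELEMENT.matcher(xml);
        while (m.find()) {
            tokens.add(m.group());   // one token per money transfer order
        }
        return tokens;
    }
}
```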
The Module aws-camelk-sqs
This is the last module of our Camel pipeline. It consumes the messages from the SQS queue and forwards them, as JSON, to the REST API:
from(aws2Sqs(queueName).useDefaultCredentialsProvider(true))
.unmarshal(jaxbDataFormat)
.marshal().json(JsonLibrary.Jsonb)
.setHeader(Exchange.HTTP_METHOD, constant("POST"))
.to(http(uri));
This Camel route subscribes to the AWS SQS queue whose name is given by the queueName
property, and it unmarshals each XML message it receives into Java objects. Given that each XML message contains a money transfer order, it is unmarshaled into the corresponding MoneyTransfer
Java class instance. Then, once unmarshaled, each MoneyTransfer
Java class instance is marshaled again into a JSON payload. This is required because our REST interface consumes JSON payloads, and, as opposed to the standard JAX-RS client, which is able to automatically perform conversions from Java objects to JSON, the http()
Camel component used here isn't. Hence, we need to do it manually. By setting the exchange's header to the POST constant, we set the type of HTTP request that will be sent to the REST API. Last but not least, the endpoint URI is, as usual, injected as an externally defined property, from the application.properties
file.
Unit Testing
Before deploying and running our microservices, we need to unit test them. The project includes a couple of unit tests for almost all its modules - from aws-camelk-model, where the domain model and its various conversions from/to XML are tested, to aws-camelk-jaxrs, which is our terminus microservice. Running the unit tests is simple. Just execute:
$ cd aws-camelk
$ ./delete-all-buckets.sh #Delete all the buckets named "mys3*" if any
$ ./purge-sqs-queue.sh #Purge the SQS queue named myQueue if it exists and isn't empty
$ mvn clean package #Clean-up, run unit tests and create JARs
A full unit test report will be displayed by the maven-surefire-plugin. In order for the unit tests to run as expected, an AWS account is required, and the AWS CLI should be installed and configured on the local box. This means that, among other things, the file ~/.aws/credentials contains your aws_access_key_id and aws_secret_access_key properties with their associated values.
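For reference, a typical ~/.aws/credentials file looks like the following (the key values shown are AWS's documentation placeholders, not real credentials):

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```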
The reason is that the unit tests use the AWS SDK (Software Development Kit) to handle S3 buckets and SQS queues, which makes them not quite unit tests but, rather, a combination of unit and integration tests.
Deploying and Running
Now, to deploy and run our microservices, there are many different scenarios to consider - from simple local standalone execution to PaaS deployments like OpenShift or EKS, by way of local K8s clusters like Minikube. Accordingly, in order to avoid confusion, we have preferred to dedicate a separate post to each deployment scenario.
So stay close to your browser to see where the story takes us next.