Migrate Standalone HornetQ Configuration to ActiveMQ Cluster
Learn more about migrating a standalone HornetQ configuration to an ActiveMQ cluster.
ActiveMQ is currently in major version 5, minor version 15. There is also a separate product called Apache ActiveMQ Artemis, a newer JMS broker based on the HornetQ codebase donated by Red Hat, which brings the broker's JMS implementation up to the 2.0 specification.
Recently, we decided to migrate from HornetQ to ActiveMQ, but we could not find an article describing such a migration. The reasons that drove us to this migration are:
- ActiveMQ is already supported by Docker, and several open-source examples out there can get an AMQ container running in minutes. HornetQ, on the other hand, is so old and unsupported that we would have to write our own Dockerfiles, wrapper scripts, etc. Why do we need Docker at all? Because we have some HornetQ instances that were installed by hand on both staging and production systems and are not easily manageable at all.
- In addition, moving to Docker is a good opportunity to run all the JMS servers in a Kubernetes cluster.
- We would like to use the ActiveMQ clustering feature. Previous experiments proved that HornetQ clustering was not that stable.
- Unlike ActiveMQ, HornetQ does not ship an embedded dashboard for configuration management and monitoring.
- Last but not least, Apache ActiveMQ has the largest number of installations and the widest distribution of all open-source message brokers.
Therefore, I'm writing this tutorial to help others who are skeptical about whether or not they should do this kind of upgrade.
Configuration
First of all, let's see which HornetQ configuration files are worth migrating:
config/hornetq-jms.xml: Defines all the queues and topics.
config/hornetq-configuration.xml: Defines the broker configuration.
The above configuration can easily be migrated to ActiveMQ's broker.xml. Let's look at some specific examples:
Queues/Topics
Here is an example of a queue and a topic in both HornetQ and ActiveMQ.
In HornetQ, destinations are declared in hornetq-jms.xml; in ActiveMQ, they are declared directly in broker.xml.
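A minimal sketch, assuming a queue named ExampleQueue and a topic named ExampleTopic (the names are illustrative). In HornetQ's hornetq-jms.xml:

```xml
<configuration xmlns="urn:hornetq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
   <!-- A queue bound in JNDI under /queue/ExampleQueue -->
   <queue name="ExampleQueue">
      <entry name="/queue/ExampleQueue"/>
   </queue>
   <!-- A topic bound in JNDI under /topic/ExampleTopic -->
   <topic name="ExampleTopic">
      <entry name="/topic/ExampleTopic"/>
   </topic>
</configuration>
```

And the rough equivalent in ActiveMQ Artemis' broker.xml, using the 2.x address model inside the <core> section:

```xml
<addresses>
   <!-- An anycast address backed by a single queue: JMS queue semantics -->
   <address name="ExampleQueue">
      <anycast>
         <queue name="ExampleQueue"/>
      </anycast>
   </address>
   <!-- A multicast address: JMS topic semantics -->
   <address name="ExampleTopic">
      <multicast/>
   </address>
</addresses>
```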
Security Settings
Although ObjectMessage usage is generally discouraged, as it couples the classpaths of producers and consumers, ActiveMQ supports it as part of the JMS specification. ObjectMessage objects rely on Java serialization to marshal/unmarshal the object payload. This process is generally considered unsafe, because a malicious payload can exploit the host system. That's why, starting with versions 5.12.2 and 5.13.0, ActiveMQ forces users to explicitly whitelist the packages that can be exchanged using ObjectMessages.
If you need to exchange object messages, you have to whitelist the packages your applications use. You can do that with the org.apache.activemq.SERIALIZABLE_PACKAGES system property, which is interpreted by both the broker and the ActiveMQ client library.
In our applications (both JMS consumers and producers), we decided to add this system property in the wrapper systemd script, which launches all the Linux Java apps:
JAVA_OPTS="-Dorg.apache.activemq.SERIALIZABLE_PACKAGES=* ..."
The security settings themselves map from hornetq-configuration.xml to broker.xml with the same <security-settings> structure.
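A minimal sketch of such a block (the admin role and the permission list are illustrative); the same XML works in HornetQ's hornetq-configuration.xml and, inside the <core> section, in ActiveMQ's broker.xml:

```xml
<security-settings>
   <!-- "#" matches every address/queue -->
   <security-setting match="#">
      <permission type="createDurableQueue" roles="admin"/>
      <permission type="deleteDurableQueue" roles="admin"/>
      <permission type="createNonDurableQueue" roles="admin"/>
      <permission type="deleteNonDurableQueue" roles="admin"/>
      <permission type="send" roles="admin"/>
      <permission type="consume" roles="admin"/>
   </security-setting>
</security-settings>
```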
System Properties
The following system properties are passed during the creation of the JNDI Context and the subsequent lookup of the ConnectionFactory:

| HornetQ | ActiveMQ |
| --- | --- |
| connection.factory=/ConnectionFactory (given that hornetq-jms.xml binds the connection factory under the /ConnectionFactory JNDI entry) | By default it is: connection.factory=ConnectionFactory |
| userName=$userName password=$password | |
| java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory | java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory |
| java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces | java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces |
| java.naming.provider.url=$BROKER_IP:$BROKER_PORT | java.naming.provider.url=tcp://$BROKER_IP:$BROKER_PORT, or for a failover setup: java.naming.provider.url=failover:(tcp://$BROKER_MASTER_IP:$BROKER_MASTER_PORT,tcp://$BROKER_SLAVE_IP:$BROKER_SLAVE_PORT) |
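For reference, a minimal client-side sketch of how these properties are used. The broker addresses and credentials are placeholders, and it assumes the ActiveMQ 5.x client library (activemq-client) on the classpath, talking to the broker's default acceptor on 61616/61617:

```java
import java.util.Properties;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JndiLookupExample {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        // Placeholder broker addresses; failover:(...) covers both master and slave.
        env.put(Context.PROVIDER_URL,
                "failover:(tcp://10.0.0.1:61616,tcp://10.0.0.2:61617)");

        Context ctx = new InitialContext(env);
        // "ConnectionFactory" is the default binding exposed by the ActiveMQ JNDI context.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");

        Connection connection = cf.createConnection("admin", "admin"); // placeholder credentials
        connection.start();
        // ... create sessions, producers and consumers exactly as with HornetQ ...
        connection.close();
        ctx.close();
    }
}
```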
JMS Settings
The following mappings need to be kept in sync with HornetQ's configuration. In general, the configuration directives keep the same names.

| HornetQ | ActiveMQ |
| --- | --- |
| <persistence-enabled>true</persistence-enabled> | Enabled by default. |
| <security-enabled>false</security-enabled> | To disable security completely, simply set the security-enabled property to false in the broker.xml file. Keep in mind that no production system (and arguably no environment at all) should ever disable security; read fallacy number one of the fallacies of distributed computing before doing so. |
| <paging-directory>${data.dir:../data}/paging</paging-directory> | default: /opt/amq/sharedstore/paging |
| <bindings-directory>${data.dir:../data}/bindings</bindings-directory> | default: /opt/amq/sharedstore/bindings |
| <large-messages-directory>${data.dir:../data}/large-messages</large-messages-directory> | default: /opt/amq/sharedstore/large-messages |
| <journal-directory>${data.dir:../data}/journal</journal-directory> | default: /opt/amq/sharedstore/journal |
| <journal-min-files>10</journal-min-files> | <core><journal-min-files>2</journal-min-files></core> |
| <log-delegate-factory-class-name>org.hornetq.integration.logging.Log4jLogDelegateFactory</log-delegate-factory-class-name> | Deprecated; this was the name of the factory class used for log delegation. |
| <message-expiry-scan-period>5000</message-expiry-scan-period> | message-expiry-scan-period: how often the queues are scanned for expired messages (in milliseconds; the default is 30000 ms, set to -1 to disable the reaper thread). |
| <message-expiry-thread-priority>3</message-expiry-thread-priority> | message-expiry-thread-priority: the reaper thread priority (must be between 1 and 10, 10 being the highest; the default is 3). |
| <connection-ttl-override>15000</connection-ttl-override> | If you do not wish clients to be able to specify their own connection TTL, you can override all values with a global connection-ttl-override set on the server side. The default is -1, which means "do not override" (i.e. let clients use their own values). |
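Putting the ActiveMQ side of these mappings together, a minimal sketch of the <core> section in broker.xml could look like this (the directory paths assume the /var/lib/artemis/data layout of the Docker image used later in this article; all values are examples):

```xml
<core xmlns="urn:activemq:core">
   <persistence-enabled>true</persistence-enabled>
   <!-- Never disable security in production; see the note above. -->
   <security-enabled>false</security-enabled>

   <!-- Data directories; /var/lib/artemis/data is backed by the shared volume. -->
   <paging-directory>/var/lib/artemis/data/paging</paging-directory>
   <bindings-directory>/var/lib/artemis/data/bindings</bindings-directory>
   <large-messages-directory>/var/lib/artemis/data/large-messages</large-messages-directory>
   <journal-directory>/var/lib/artemis/data/journal</journal-directory>

   <journal-min-files>2</journal-min-files>

   <!-- Expired-message reaper and connection TTL, mirroring the HornetQ values. -->
   <message-expiry-scan-period>5000</message-expiry-scan-period>
   <message-expiry-thread-priority>3</message-expiry-thread-priority>
   <connection-ttl-override>15000</connection-ttl-override>
</core>
```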
Address Settings
You may use exactly the same <address-settings> in ActiveMQ's broker.xml as you had in HornetQ's hornetq-configuration.xml.
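As a hypothetical, catch-all example (all values are illustrative), a block like the following is valid, with the same element names, in both brokers:

```xml
<address-settings>
   <!-- "#" matches every address/queue -->
   <address-setting match="#">
      <redelivery-delay>5000</redelivery-delay>
      <max-delivery-attempts>10</max-delivery-attempts>
      <max-size-bytes>104857600</max-size-bytes>
      <page-size-bytes>10485760</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>
```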
Launching ActiveMQ Through Docker
Victor Romero has written a Dockerfile for ActiveMQ Artemis that lets us launch an ActiveMQ cluster in minutes:
docker run --name='activemq-master' \
-v /var/artemis-data/etc-override-master:/var/lib/artemis/etc-override \
-v /opt/amq/sharedstore:/var/lib/artemis/data \
-e 'ARTEMIS_USERNAME=admin' \
-e 'ARTEMIS_PASSWORD=admin' \
-e 'ENABLE_JMX=true' \
-e 'JAVA_OPTS=-Dorg.apache.activemq.SERIALIZABLE_PACKAGES=*' \
-p 8161:8161 \
-p 61616:61616 \
-d vromero/activemq-artemis:2.9.0
docker run --name='activemq-slave' \
-v /var/artemis-data/etc-override-slave:/var/lib/artemis/etc-override \
-v /opt/amq/sharedstore:/var/lib/artemis/data \
-e 'ARTEMIS_USERNAME=admin' \
-e 'ARTEMIS_PASSWORD=admin' \
-e 'ENABLE_JMX=true' \
-e 'JAVA_OPTS=-Dorg.apache.activemq.SERIALIZABLE_PACKAGES=*' \
-p 8162:8161 \
-p 61617:61616 \
-d vromero/activemq-artemis:2.9.0
As we can see above, we run two containers (named activemq-master and activemq-slave, respectively) on the same or on different machines. Let's go through the parameters one by one:

| Parameter | Explanation |
| --- | --- |
| -v /var/artemis-data/etc-override-master:/var/lib/artemis/etc-override | Creates a shared volume between the container and the host machine so that, on startup, the container can read multiple broker XML files. Multiple files with configuration snippets can be dropped into the /var/lib/artemis/etc-override volume. They must follow the naming convention broker-{{num}}.xml, where num is a numeric identifier of the snippet. The snippets are merged with the default configuration; the files are merged in alphabetical order, and in case of a collision the latest change wins. |
| -v /opt/amq/sharedstore:/var/lib/artemis/data | Creates a shared volume between the container and the host machine so that the container can read and store the JMS data files/folders. This is very useful when you need to share data between the master and slave nodes. If these nodes run on different machines, you should share this folder through NFS. |
| -e 'ARTEMIS_USERNAME=admin' -e 'ARTEMIS_PASSWORD=admin' | Username and password for the dashboard, exposed on port 8161 for the master node and 8162 for the slave node. |
| -e 'ENABLE_JMX=true' | Because of JMX's nature (often dynamic ports for RMI, plus the need to configure a public IP address to reach the RMI server), using JMX in Docker is generally discouraged. In certain scenarios it can still be advisable, for example when deploying in a container orchestrator such as Kubernetes or Mesos with a monitoring sidecar alongside this container. For such cases, the ENABLE_JMX environment variable can be used. |
| -e 'JAVA_OPTS=-Dorg.apache.activemq.SERIALIZABLE_PACKAGES=*' | Explained above; passes additional Java options (here, the serialization whitelist) to the Artemis runtime. |
| -p 8161:8161 -p 61616:61616 (master), -p 8162:8161 -p 61617:61616 (slave) | The master node listens on port 8161 (web dashboard) and 61616 (JMS); the slave node listens on port 8162 (web dashboard) and 61617 (JMS). |
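To illustrate the etc-override mechanism described above, here is a minimal, hypothetical snippet that could be dropped into /var/artemis-data/etc-override-master on the host to configure the master side of a shared-store HA pair. The file name, the wrapper elements, and the HA settings are assumptions, not part of the original setup; the slave container would use <slave/> instead of <master>:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical /var/artemis-data/etc-override-master/broker-00.xml,
     merged into the default broker.xml by the image's startup script. -->
<configuration xmlns="urn:activemq">
   <core xmlns="urn:activemq:core">
      <!-- Shared-store HA: the master holds the file lock on the shared journal. -->
      <ha-policy>
         <shared-store>
            <master>
               <failover-on-shutdown>true</failover-on-shutdown>
            </master>
         </shared-store>
      </ha-policy>
   </core>
</configuration>
```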
Of course, there are many other things we could configure in ActiveMQ, regarding scalability, load balancing, protocols, thread management, and more. Nevertheless, this article should be useful when you want to migrate from HornetQ to ActiveMQ quickly; you can then do further fine-tuning according to your network topology and your applications' requirements.