How to Move IBM App Connect Enterprise to Containers (Part 4a)
Creating a Queue Manager in OpenShift From the Command Line
This blog is part of a series; see Part 3 to explore the earlier articles.
In the following scenarios, integrations with IBM MQ are used. When we move IBM App Connect into containers, the recommendation is that wherever possible, we should move to working with MQ remotely. This is highly preferable to having a local MQ server residing alongside App Connect since it allows Kubernetes to fully take over the administration of scaling and high availability of the components. This is part of a broader common practice around containerization, noting that each container should have only one job.
With this in mind, our integrations are going to need a remote queue manager to talk to when we migrate them into the container world. In this post, we’re going to look at how to stand up an IBM MQ Queue Manager in a container on our Kubernetes environment. For consistency with our other scenarios, we will choose the OpenShift container platform, although it is important to say that IBM MQ is supported on all major Kubernetes environments.
Note that this is a simplistic deployment, purely to provide a remote queue manager for our App Connect scenarios, not an example of a proper production deployment. For example, we will not explore high availability and security, nor will we assign persistent storage to it.
We can create our containerized queue manager in one of two ways: via the command line, and via a web console. In this post, we will describe the command line option.
The IBM MQ Operator
As with App Connect, IBM has created an 'operator' based on the Operator Framework to simplify deployment and running of IBM MQ Queue Managers in OpenShift.
The MQ Operator performs a number of functions. The most important in this scenario is to allow OpenShift to work with a new type of object called a QueueManager. The QueueManager object, unsurprisingly, looks after an IBM MQ Queue Manager running in a container.
In this scenario, we will provide Kubernetes with a definition file (a custom resource of the QueueManager type, whose structure the Operator defines via a Custom Resource Definition, or CRD) describing the QueueManager object we wish to create. The MQ Operator will automatically notice that new definition and create a queue manager container with the queues and channels requested.
The IBM MQ Operator is installed in much the same way as the IBM App Connect operator and only needs to be done once on a cluster. If you have already installed the Cloud Pak for Integration, then you should already have the operator in your catalog. If not, please refer to the instructions for installing an IBM MQ Operator in IBM documentation:
Installing and uninstalling the IBM MQ Operator on Red Hat OpenShift
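Before following those instructions, it is worth checking whether the Operator is already present. A quick check (a sketch; the exact names can vary by operator version) is to look for the QueueManager custom resource definition that the Operator installs:

# oc get crd queuemanagers.mq.ibm.com

If the CRD is listed, the MQ Operator is available and you can move straight on to configuring a queue manager.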
Configuring a Queue Manager With a ConfigMap
Queue managers can be declaratively configured using two key types of file:
- mqsc files to define their queues and channels
- ini files to define properties about the queue manager itself
In our simple deployment, the only thing we need to do is provide the definition for a queue and a channel, so we only need an mqsc file, but the process for an ini file is largely identical, as sketched below.
The IBM MQ Operator is specifically designed to retrieve configuration information from OpenShift on startup via a ConfigMap or Secret. These are created in the namespace (project) where you will deploy the queue manager, and referenced in the queue manager definition.
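For illustration, an ini file would be supplied in exactly the same way: a ConfigMap holding the ini text, referenced from the queue manager definition via an ini list alongside the mqsc list we use later. The snippet below is a minimal, hypothetical sketch; our scenario does not need it, and the example-ini name and Log stanza are purely illustrative.

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-ini
  namespace: mq
data:
  example.ini: |
    Log:
      LogBufferPages=0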
ConfigMaps and Secrets in Kubernetes
When a container image is started up in Kubernetes, we often want to pass some key information to it such as some properties pertaining to how it should run, or some credentials it will need. The Kubernetes objects ConfigMap and Secret are designed specifically for this purpose.
The two are very similar; the primary difference is that Secrets, as you might guess, are more suited to sensitive information. Secret values are stored base64 encoded (which is encoding rather than encryption) and there are some differences in the way they are handled.
It should be noted that the basic Secrets facility in Kubernetes is often not sufficiently secure on its own. However, it is a pluggable architecture that can be enhanced through third-party “vault” software such as that provided by HashiCorp, or those provided natively by the major cloud providers.
A ConfigMap or Secret can be populated on the Kubernetes environment prior to a container being deployed; then, on startup, its values can be pulled into the container either as environment variables or as mounted files. This technique provides a way to supply different values in each environment to which a container is deployed – e.g. different credentials in pre-production and production. It also enables us to use one standard container image for many different purposes, which is how we will be using it in this post.
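As a generic illustration of the mechanism (not something our MQ scenario needs; the names demo, app-config and LOG_LEVEL are hypothetical), a pod can consume a ConfigMap either as an environment variable or as a mounted file:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: registry.example.com/demo:latest
      env:
        - name: LOG_LEVEL              # single value injected as an environment variable
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: log-level
      volumeMounts:
        - name: config-volume          # the whole ConfigMap mounted as files under /etc/app
          mountPath: /etc/app
  volumes:
    - name: config-volume
      configMap:
        name: app-config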
In our scenario, we will use a ConfigMap to supply the queue definitions to a standard queue manager container.
Create a ConfigMap to Define the Queues and Channels on the Queue Manager
In our case, we would like to create a queue, and also an SVRCONN channel so that App Connect can put and get messages from it; we are going to provide both definitions via a Kubernetes ConfigMap.
Our mqsc definition looks like this:
DEFINE QLOCAL('BACKEND') REPLACE
DEFINE CHANNEL('ACECLIENT') CHLTYPE (SVRCONN) SSLCAUTH(OPTIONAL)
ALTER QMGR CHLAUTH(DISABLED)
REFRESH SECURITY
It says to create a queue named ‘BACKEND’, replacing any queue of that name currently present. It then defines a channel of type ‘SVRCONN’ (the type used by clients to talk to a queue manager) named ‘ACECLIENT’. For convenience the channel security is switched off. Clearly we would make different choices for a production queue manager, but this will do for our purposes.
First we need to create a definition file for the ConfigMap, then we’ll deploy it with the Kubernetes (OpenShift) command line.
Create a ConfigMap definition file that looks like this with our mqsc instructions embedded within it:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mqsc-example
  namespace: mq
data:
  example1.mqsc: |
    DEFINE QLOCAL('BACKEND') REPLACE
    DEFINE CHANNEL('ACECLIENT') CHLTYPE (SVRCONN) SSLCAUTH(OPTIONAL)
    ALTER QMGR CHLAUTH(DISABLED)
    REFRESH SECURITY
Save this YAML in a file named ConfigMap-mqsc-example.yaml.
Now create the ConfigMap using the following command:
# oc apply -f ConfigMap-mqsc-example.yaml
That’s now done. The mqsc information is stored in the mq namespace of the Kubernetes cluster, ready for the container to use it when it starts up.
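As an aside, you could achieve the same result without hand-writing the ConfigMap YAML: if the mqsc commands were saved locally in a file named example1.mqsc, the ConfigMap could be generated directly from that file (an alternative sketch, assuming such a local file exists):

# oc create configmap mqsc-example --from-file=example1.mqsc -n mq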
Should you want to list the ConfigMaps in your environment, run the following command:
# oc get ConfigMap
NAME           DATA   AGE
mqsc-example   1      5m
To view the contents of the ConfigMap, run the following command:
# oc describe ConfigMap mqsc-example
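The describe output should echo back the mqsc commands we embedded; abridged, it will look roughly like this (labels and annotations omitted):

Name:         mqsc-example
Namespace:    mq

Data
====
example1.mqsc:
----
DEFINE QLOCAL('BACKEND') REPLACE
DEFINE CHANNEL('ACECLIENT') CHLTYPE (SVRCONN) SSLCAUTH(OPTIONAL)
ALTER QMGR CHLAUTH(DISABLED)
REFRESH SECURITY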
Deploying a Queue Manager Using the OCP CLI
We now need to create a definition file for our MQ Operator describing what we want it to deploy.
The following will instruct the operator to deploy an IBM MQ container using the mqsc definitions in the ConfigMap we just created.
apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  namespace: mq
  name: quickstart-cp4i
spec:
  license:
    accept: true
    license: L-APIG-BMJJBM
    use: Production
  web:
    enabled: true
  version: 9.2.1.0-r1
  template:
    pod:
      containers:
        - env:
            - name: MQSNOAUT
              value: 'yes'
          name: qmgr
  queueManager:
    name: QUICKSTART
    mqsc:
      - configMap:
          name: mqsc-example
          items:
            - example1.mqsc
    storage:
      queueManager:
        type: ephemeral
The container will have a queue manager called “QUICKSTART” and will use the IBM MQ container image that has version 9.2.1.0-r1 of the product binaries. For simplicity, it will have ephemeral (non-persistent) storage, and security will be turned off. It will then draw in and create the queue and channels we defined in the ConfigMap we created earlier.
Save the above YAML in a file named mq-quickstart.yaml.
Now, create the queue manager using the command:
# oc apply -f mq-quickstart.yaml
You may recall from Scenario 3 that Kubernetes always deploys containers within a 'pod'. The operator has taken care of this detail for you, and you can now check the status of the pod using the following standard Kubernetes command. You can see that the Operator has auto-generated a name for the pod (quickstart-cp4i-ibm-mq-0) based on the metadata.name field in our definition file above.
# oc get pods
NAME                       READY   STATUS    RESTARTS   AGE
quickstart-cp4i-ibm-mq-0   1/1     Running   0          4m22s
The IBM MQ container knows how to report not just the status of the container, but also the queue manager running inside it. As such, when the above command reports a status of ‘Running’, this in fact indicates that the queue manager has been successfully deployed and started.
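If the pod takes a while to reach the Running state, or you simply want to watch the queue manager come up, the container log is a useful first check (a minimal sketch; the exact log lines vary by MQ version, but you should see a message indicating that queue manager QUICKSTART has started):

# oc logs quickstart-cp4i-ibm-mq-0 -n mq

Adding the -f flag follows the log as it is written.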
The mq-quickstart.yaml definition we just deployed instructed the MQ Operator to create an object of the kind QueueManager, which represents the queue manager running in the underlying container. We can talk to this object directly from the Kubernetes command line to query the running queue manager’s status and configuration. For example:
# oc get QueueManager
NAME              PHASE
quickstart-cp4i   Running
Once again, the phase (‘Running’) indicates that the queue manager has been successfully deployed.
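For more detail than this one-line summary, you can inspect the QueueManager object itself; its status section is maintained by the Operator (standard commands, though the precise fields reported depend on the Operator version):

# oc describe QueueManager quickstart-cp4i -n mq
# oc get QueueManager quickstart-cp4i -n mq -o yaml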
For more information see: Operating IBM MQ using the IBM MQ Operator
Communicating With the Queue Manager via a Kubernetes 'Service'
We’ve seen that the queue manager is running, but how would you connect to it to GET and PUT messages?
For the moment we’ll just consider the situation where we would like to connect to MQ from another container in the same Kubernetes cluster, such as an App Connect container for example. This will suffice for the next few scenarios in our series, but it is of course also possible to expose MQ beyond the cluster, and that is explored in the MQ documentation center.
When the Operator instantiated our MQ container in a pod, it also created a Kubernetes object called a “Service”. Put simply, a Service is a logical hostname that can be used to talk to containers in a pod within the same Kubernetes cluster. The Service provides a level of indirection: pods can be restarted or moved between Kubernetes worker nodes, and calls to the Service will still be routed to wherever the running pod is located. This is a great example of the benefit that Kubernetes brings, as it performs all the routing on your behalf, dynamically adjusting it as necessary.
We can obtain the service names (and port numbers) within the mq namespace using the following command:
# oc get services -n mq
NAME                        TYPE        CLUSTER-IP       PORT(S)
quickstart-cp4i-mq-ibm-mq   ClusterIP   172.30.189.143   9443/TCP,1414/TCP
From this, we can note that:
- Service name: quickstart-cp4i-mq-ibm-mq
- Port number: 1414
Now, we have all the information on the queue manager side that we need to use with ACE flows, or indeed any other MQ Client within the cluster.
- Queue manager name: QUICKSTART
- Queues for applications to PUT and GET messages: BACKEND
- Channel for communication with the application: ACECLIENT
- MQ hostname and listener port: quickstart-cp4i-mq-ibm-mq/1414
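As an illustration of how a client inside the cluster might use these details (a hedged sketch; exactly how the connection is configured depends on the client, and an App Connect MQ policy would simply carry the same values), the standard MQSERVER environment variable for an MQ client application would be:

MQSERVER='ACECLIENT/TCP/quickstart-cp4i-mq-ibm-mq(1414)'

Within the cluster, the Service is also addressable by its fully qualified DNS name, quickstart-cp4i-mq-ibm-mq.mq.svc.cluster.local, which is useful when the client runs in a different namespace.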
Should You Use Runmqsc Against a Queue Manager in a Container?
We would encourage you to administer queue managers running in containers using the Operator as described above: viewing queue manager properties via the QueueManager object, and making changes by re-applying the definition file. However, it is still possible to connect to them in the way you may be used to today, using the runmqsc shell and running commands from within it.
We would strongly discourage making changes to running containers via runmqsc as this would result in the live configuration of the queue manager no longer matching the definition files stored in your source code repository – this causes 'configuration drift' and risks operational instability and unpredictability. Any changes you make live would be overwritten the next time you applied the definition files. It is fundamental to a good cloud-native approach that the definitions in the source code repository are considered the master configuration.
Runmqsc could potentially be used to view the queue manager configuration directly. To do this, you would need MQ (and therefore runmqsc) installed on your local machine, and potentially TLS set up, etc. You would then point the CLI at the same Kubernetes 'Service' used for PUT and GET, and this would route your request through to the queue manager container.
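Since a ClusterIP Service is only reachable from inside the cluster, one way to try this from a workstation is to port-forward the Service locally and run runmqsc in client mode (a hedged sketch; it assumes an MQ installation that provides runmqsc, and relies on our channel having security disabled):

# oc port-forward service/quickstart-cp4i-mq-ibm-mq 1414:1414 -n mq

Then, in a second terminal:

# export MQSERVER='ACECLIENT/TCP/localhost(1414)'
# runmqsc -c QUICKSTART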
Another alternative is to open a remote shell session with the container. The advantage here is that you do not have to have MQ installed and configured locally as you are using the runmqsc installed in the container. However, to do this you will need to find out the specific name of the pod, rather than allowing the Kubernetes Service to do the routing for you.
As an example, to confirm that the queue manager has picked up the ConfigMap, you can open a shell in the queue manager pod and check that the queues and channels have been created as defined in the ConfigMap.
# oc rsh quickstart-cp4i-ibm-mq-0
sh-4.4$ runmqsc
5724-H72 (C) Copyright IBM Corp. 1994, 2020.
Starting MQSC for queue manager QUICKSTART.
1 : dis qmgr
AMQ8408I: Display Queue Manager details.
QMNAME(QUICKSTART) ACCTCONO(DISABLED)
CHLAUTH(DISABLED) CLWLDATA( )
dis ql(BACKEND)
2 : dis ql(BACKEND)
AMQ8409I: Display Queue details.
QUEUE(BACKEND) TYPE(QLOCAL)
dis channel(ACECLIENT)
3 : dis channel(ACECLIENT)
AMQ8414I: Display Channel details.
CHANNEL(ACECLIENT) CHLTYPE(SVRCONN)
Whilst it’s nice to know the above is possible, and it might just be useful in some edge cases around diagnostics, let’s reinforce the message that this should not be your primary approach.
You should aim to work through the Kubernetes command line using the Operator. This will enable you to view your queue managers’ configurations using the new Kubernetes QueueManager object, without having to concern yourself with which Kubernetes Service, or indeed which pod, they sit behind.
Furthermore, you should maintain your queue managers by applying updated definition files for the Operator to act on, giving you a single audit trail and source of truth for the queue manager configuration.
Automating Queue Manager Creation and Maintenance
We’ve taken some trouble to explain quite a lot of concepts along the way, but the actual actions we took to create the queue manager were really just the following two commands:
# oc apply -f ConfigMap-mqsc-example.yaml
# oc apply -f mq-quickstart.yaml
We just needed to know what information to put into the ConfigMap (the mqsc definition) and QueueManager files. It’s then easy to see how queue manager creation could be automated via a simple pipeline, and, for example, hooked up to definition files stored in a source code repository.
This “GitOps” approach has many benefits. For example, moving to a new version of the underlying MQ runtime would be done simply by changing the version field in the mq-quickstart.yaml definition file. Once that change was committed in source control, it would kick off the pipeline, which would pass the new definition file to the IBM MQ Operator. The Operator would then take on the job of using standard Kubernetes mechanisms to introduce updated containers for the existing queue managers with minimal downtime.
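As an illustration, the committed change could be as small as the following (the newer version tag shown here is hypothetical; check which versions your Operator supports):

-  version: 9.2.1.0-r1
+  version: 9.2.2.0-r1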
Acknowledgement and thanks to Kim Clark for providing valuable technical input to this article.
Published at DZone with permission of Amar Shah.