How To Move IBM App Connect Enterprise to Containers - Part 2(a)
Deploy a simple App Connect toolkit message flow onto Red Hat OpenShift by using the command line interface (CLI).
Scenario 2(a): Deploy a Simple Toolkit Message Flow Onto Red Hat OpenShift Using the Command Line Interface (CLI)
In Scenario 1 we took a simple flow from IBM Integration Bus and demonstrated that we could get it running under IBM App Connect Enterprise in an isolated Docker container. This was a good start, but of course, in a real production environment, we would need more than just an isolated container. We would want to be able to create multiple copies of the container in order to provide high availability and to scale up when the incoming workload increases. We would also need to be able to automatically spread the workload across all those container replicas.
This and much more is what is provided by a container orchestration platform, and the most commonly used platform today is of course Kubernetes. In this scenario, we’re going to take that same simple flow and deploy it onto Kubernetes and demonstrate some of these orchestration platform features.
We’re going to use Red Hat OpenShift since it is one of the most widely used and mature Kubernetes platforms. One of the great things about OpenShift is that it provides a consistent experience whether you are using it in a standalone installation on your own infrastructure, or through a managed service.
So you could use a self-installed OpenShift environment, or any of the many managed OpenShift services from IBM, AWS, or Azure and the instructions will be largely identical. OpenShift also brings many other benefits, some of which we’ll discuss as we go through.
The key differences compared to Scenario 1 will be:
- The remote pull of the BAR file: In Scenario 1 we were running Docker locally so we could pass the BAR file to the container from the local file system. In this scenario, we will show how the container can pull the BAR file from a remote location over HTTP.
- Deployment via an Operator: We will use an additional component known as an Operator to help us set up the container in OpenShift. This will perform the vast majority of the underlying work for us, significantly simplifying the deployment.
- Configuration object: We will see our first example of a “configuration” object. In this scenario, it will be the credentials for the HTTP request to retrieve the bar file.
- Deployment using the Kubernetes command line: We will show how we can use a single standard Kubernetes command to do the deployment.
Accessing an OpenShift Environment
To do this scenario you will need access to a Red Hat OpenShift environment, which is the market-leading productionized implementation of Kubernetes. This could be one you install yourself, although it would probably be easier to use a managed environment such as Red Hat OpenShift on IBM Cloud, AWS ROSA, or Azure Red Hat OpenShift. In a later post, we will show how to install on a non-OpenShift Kubernetes environment, but we are showing OpenShift first as it makes the process significantly simpler.
Introducing the App Connect Enterprise Operator
To deploy containers to Kubernetes, there are a few more steps than there were for our simple Docker example. You need to know how to translate your deployment requirements into the underlying Kubernetes constructs – not easy if it’s your first time using a container platform. Luckily, there is a standard mechanism in Kubernetes to simplify all that, known as an Operator. This is a piece of software, based on the open-source Operator Framework, that is provided along with the App Connect Enterprise certified container. The Operator for App Connect understands how to take your key deployment requirements and translate them into the necessary Kubernetes constructs, providing a much simpler way to interact with Kubernetes.
The list of things that this operator takes care of is constantly increasing in line with the operator maturity model, but here are some of the current highlights:
- Translates your requirements into Kubernetes constructs such as Deployments, Pods, Routes, Services, NodePorts, and ReplicaSets.
- Links your deployment with any environment-specific “configurations” your container will need at runtime (more on these later). It also watches these configurations for changes and ensures they are rolled out to any containers that are reliant on them.
- The operator watches custom resources such as IntegrationServer (defined by a Custom Resource Definition, or CRD) and identifies change events.
- The operator reconciles the actual state of those resources with the desired state.
- Applications (in our case, the Integration Server) based on operators retain flexibility and can be managed using kubectl and other Kubernetes-native tools.
There are a number of other services that the Operator provides, but we’ll explore those later in the post.
Operators need to be downloaded into the OpenShift catalog to be installed in a Kubernetes environment. Once there, an operator becomes just another native Kubernetes “resource”. This means that we can view it and communicate with it using the standard Kubernetes APIs, command line, and user interface, just like any other resource in Kubernetes.
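For example, once the operator has been installed, you can inspect it from the command line like any other resource. This is just a sketch: the exact names and namespace will depend on how and where the operator was installed, and the CRD name below assumes the usual plural.group naming convention.
$ oc get csv -n openshift-operators
$ oc get crd integrationservers.appconnect.ibm.com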
It is worth noting that if you have already installed the IBM Cloud Pak for Integration on your OpenShift Cluster, then the IBM App Connect Enterprise Operator will already have been installed and you can skip this step.
Install the IBM App Connect Operator by following the instructions as documented here.
Enabling the Container To Retrieve the BAR File
In Scenario 1, we passed the BAR file to the container by mounting it from the local file system. In a Kubernetes environment, our containers don’t have access to a local file system in the same way, so we will need another technique.
In this demonstration, we have chosen to make our BAR files available over HTTP, hosting them at a URL. We could host them on any HTTP server, but for simplicity, in the first part of this tutorial, our BAR file is hosted on public GitHub. This technique of performing deployments from a repository is heading in the right direction for setting up continuous integration and continuous delivery (CI/CD) pipelines – something for us to explore more in future posts.
You will need to provide basic (or alternative) authentication credentials for connecting to the URL endpoint where the BAR files are stored. Properties and credentials often change as we move from one environment to another, so we need a way to pass these in at deployment time. We do this by creating what is known as a “configuration object” in the Kubernetes environment, and then referencing this configuration object when we deploy our container. Let’s explore in a little more detail what configuration objects are, as they will be really important when we come to more complex integrations.
Introducing “Configuration Objects”
We need a mechanism to pass to the container any environment-specific information that it will need at runtime. All your existing integrations involve connecting to things (databases, queues, FTP servers, TCP/IP sockets, etc.) and each requires authentication credentials, certificates, and other properties which will have different values depending on which environment you are deployed to.
In your existing IBM Integration Bus environment, you’ll be familiar with mechanisms such as odbc.ini files for database connection properties and the mqsisetdbparms command for setting up authentication credentials (e.g. user IDs and passwords) for the various systems you connect to. To make these same credentials and properties available to our container in Kubernetes, we create “Configuration” objects. The full list of configuration types is listed at the bottom of this page in the documentation.
Our simple integration for this scenario doesn’t actually connect to anything at runtime. However, the container itself does need credentials to connect to the URL to get the BAR file on startup, so it is for this that we need to create our first configuration object.
Creating the Configuration Object
A configuration object is created just like any other object in Kubernetes, using a set of values in a YAML formatted text file. Inside that YAML file, we will embed the authentication credentials themselves, encoded in Base64.
Prepare the Authentication Credentials
The authentication credentials must be formatted in the following way:
{"authType":"BASIC_AUTH","credentials":{"username":"myUsername","password":"myPassword"}}
Where myUsername and myPassword are the user ID and password required to connect to the URL where the BAR file is located. In our case, this is public GitHub, so no username and password are required, which is why our credentials will look like this:
{"authType":"BASIC_AUTH","credentials":{"username":"","password":""}}
Base64 Encode the Credentials
The configuration file requires that the credentials are Base64-encoded. You can create the Base64 representation of this data using the command shown below:
$ echo '{"authType":"BASIC_AUTH","credentials":{"username":"","password":""}}' | base64
The result will be:
eyJhdXRoVHlwZSI6IkJBU0lDX0FVVEgiLCJjcmVkZW50aWFscyI6eyJ1c2VybmFtZSI6IiIsInBhc3N3b3JkIjoiIn19Cgo=
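If you want to sanity-check the encoding, you can decode it straight back. The -d flag shown here is the GNU coreutils form; on some macOS versions the equivalent flag is -D.
$ echo 'eyJhdXRoVHlwZSI6IkJBU0lDX0FVVEgiLCJjcmVkZW50aWFscyI6eyJ1c2VybmFtZSI6IiIsInBhc3N3b3JkIjoiIn19Cgo=' | base64 -d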
Create the Definition File for the Configuration Object
The following YAML code shows an example of what your configuration object should look like:
apiVersion: appconnect.ibm.com/v1beta1
kind: Configuration
metadata:
  name: github-barauth
  namespace: ace-demo
spec:
  data: eyJhdXRoVHlwZSI6IkJBU0lDX0FVVEgiLCJjcmVkZW50aWFscyI6eyJ1c2VybmFtZSI6IiIsInBhc3N3b3JkIjoiIn19Cgo=
  description: authentication for github
  type: barauth
Create a text file named github-barauth.yaml with the above contents.
The important fields in the file are as follows:
- The kind parameter states that we want to create a Configuration object.
- The name parameter is the name we will use to refer to this configuration object later when creating the integration server.
- The data parameter is our Base64 encoded credentials.
- The type parameter of barauth notes that these are the credentials to be used for authentication when downloading a BAR file from a remote URL.
You can read more about the barauth configuration object here.
Log in to the OpenShift Cluster
To create the configuration object, we must first be logged in to our OpenShift cluster:
$ oc login --token=xxxxxxxxx --server=https://yyyyyy.ibm.com:6443
Create the Configuration Object in OpenShift
We will now create the Configuration object within our Red Hat OpenShift environment using the YAML file that we created above:
$ oc apply -f github-barauth.yaml
The command “oc” is the Red Hat OpenShift equivalent of the Kubernetes command “kubectl” and is essentially identical. You can check the status of your configuration object or list all the Configuration objects that you have created using the following command:
$ oc get Configuration
NAME             AGE
github-barauth   4m
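If you want to see the full stored definition, including the Base64-encoded data, you can retrieve the object in YAML form:
$ oc get configuration github-barauth -o yaml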
Creating an Integration Server
We are finally ready to deploy an IBM App Connect certified container, with a link to our BAR file, from the command line. To do this we must first create a YAML definition file for the Integration Server object, which should look like the following:
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: http-echo-service
  namespace: ace-demo
  labels: {}
spec:
  adminServerSecure: false
  barURL: >-
    https://github.com/amarIBM/hello-world/raw/master/HttpEchoApp.bar
  configurations:
    - github-barauth
  createDashboardUsers: true
  designerFlowsOperationMode: disabled
  enableMetrics: true
  license:
    accept: true
    license: L-KSBM-C37J2R
    use: AppConnectEnterpriseProduction
  pod:
    containers:
      runtime:
        resources:
          limits:
            cpu: 300m
            memory: 350Mi
          requests:
            cpu: 300m
            memory: 300Mi
  replicas: 1
  router:
    timeout: 120s
  service:
    endpointType: http
  version: '12.0'
Save this YAML file as http-echo-service.yaml.
The two most important parameters to note are:
- barURL, which denotes the URL where our BAR file resides.
- configurations, which points to the configuration object we created in the previous section.
It’s worth noting that although we have only one integration flow in our container, you can have many. Indeed, you could have multiple App Connect “applications” in your BAR file, and you can even specify multiple BAR files in the above barURL parameter by using a comma-separated list. For example:
barURL: >-
https://github.com/amarIBM/hello-world/raw/master/HttpEchoApp.bar,https://github.com/amarIBM/hello-world/raw/master/CustomerOrderAPI.bar
Some considerations apply if deploying multiple BAR files:
- Ensure that all of the applications can coexist (with no names that clash).
- Ensure that you provide all of the configurations that are needed for all of the BAR files (see the snippet after this list).
- All of the BAR files must be accessible by using the single set of credentials that are specified in the configuration object of type BarAuth.
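For example, if one of the BAR files also needed database properties, the configurations list in the Integration Server definition above would simply name every required configuration object. Here my-odbc-ini is purely a hypothetical second configuration, not something we create in this scenario:
configurations:
  - github-barauth
  - my-odbc-ini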
Now deploy the Integration Server YAML to your OCP cluster using the steps below:
- Log in to your OCP cluster.
$ oc login --token=xxxxxxxxx --server=https://yyyyyy.ibm.com:6443
- Create the Integration Server using the following command:
$ oc apply -f http-echo-service.yaml
You should receive the confirmation:
integrationserver.appconnect.ibm.com/http-echo-service created
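You can also check on the IntegrationServer custom resource itself; the operator updates its status as it reconciles (the exact columns shown will vary with the operator version):
$ oc get integrationservers -n ace-demo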
- Verify the status of the Integration Server pod.
In Kubernetes, containers are always deployed within a unit called a pod. Let's look for the one we just created.
$ oc get pods
NAME                                    READY   STATUS    RESTARTS   AGE
http-echo-service-is-64bc7f5887-g6dcd   1/1     Running   0          67s
You’ll notice that it states “1/1”, meaning that we requested only one replica of this container (replicas: 1 in the definition file), and that requested replica has been started. Later on in this scenario, we’ll explore how Kubernetes can dynamically scale up and down, evenly load balance across replicas, automatically reinstate pods if they fail, and roll out new replicas with no downtime.
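As a quick taste of the scaling part, and purely as a sketch (this assumes the operator reconciles a change to the replicas field into extra pods, as the definition file above suggests), you could patch the custom resource and watch further replicas appear:
$ oc patch integrationserver http-echo-service -n ace-demo --type merge -p '{"spec":{"replicas":3}}'
$ oc get pods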
You can also verify the status of your application by looking at the pod log:
$ oc logs <pod name>
2021-11-17 10:16:06.172262: BIP2155I: About to 'Start' the deployed resource 'HTTPEcho' of type 'Application'. An http endpoint was registered on port '7800', path '/Echo'.
2021-11-17 10:16:06.218826: BIP3132I: The HTTP Listener has started listening on port '7800' for 'http' connections.
2021-11-17 10:16:06.218973: BIP1996I: Listening on HTTP URL '/Echo'.
From the pod logs, we can see that the deployed HTTPEcho service is listening on the service endpoint “/Echo”.
- Get the external URL for your service using ‘routes’.
$ oc get routes
NAME                      HOST/PORT                                                               PATH   SERVICES
http-echo-service-http    http-echo-service-http-ace-demo.apps.cp4i-2021-demo.cp.fyre.ibm.com           http-echo-service-is
http-echo-service-https   http-echo-service-https-ace-demo.apps.cp4i-2021-demo.cp.fyre.ibm.com          http-echo-service-is
- Invoke the service with the curl command, using the first (HTTP) URL you found in the previous step.
$ curl -X POST http://http-echo-service-http-ace-demo.apps.cp4i-2021-demo.cp.fyre.ibm.com/Echo
You should receive a response similar to the following, letting you know that your request made it into the container and back out again.
<Echo>
  <DateStamp>2021-11-17T06:03:59.717574Z</DateStamp>
</Echo>
So that’s it, you’ve done it! You’ve deployed a (very) simple flow from an IBM Integration Bus environment into an IBM App Connect container in a Red Hat OpenShift environment. Once the correct definition files were created, you can see that it took only a handful of commands to perform the deployment. You can imagine how easy it would be to keep those files in a repository too and incorporate the deployment into an automated pipeline.
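As a hint of what that might look like, here is a minimal sketch of a deployment script a pipeline could run, assuming the two YAML files from this scenario are versioned alongside the flow (OC_TOKEN and OC_SERVER are hypothetical variables supplied by the pipeline):
#!/bin/sh
# Minimal sketch: apply the configuration object first, then the
# Integration Server that references it.
set -e
oc login --token="$OC_TOKEN" --server="$OC_SERVER"
oc apply -f github-barauth.yaml
oc apply -f http-echo-service.yaml
# Show the operator's view of the result.
oc get integrationserver http-echo-service -n ace-demo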
Acknowledgment and thanks to Kim Clark for providing valuable inputs to this article.