
Deploy a simple App Connect Toolkit message flow onto Red Hat OpenShift using the command line interface (CLI)

By AMAR SHAH posted Sun November 21, 2021 12:05 PM

  

This blog is part of a series. For the whole series list, see the series content page (http://ibm.biz/iib-ace).

Scenario 2a: Deploy a simple Toolkit message flow onto Red Hat OpenShift using the command line interface (CLI)


In Scenario 1 we took a simple flow from IBM Integration Bus and demonstrated that we could get it running in IBM App Connect Enterprise in an isolated Docker container. This was a good start, but of course in a real production environment we would need more than just an isolated container. We would want to be able to create multiple copies of the container in order to provide high availability, and to scale up when the incoming workload increases. We would also need to be able to automatically spread workload across all those container replicas. This, and much more, is what a container orchestration platform provides, and the most commonly used platform today is of course Kubernetes. In this scenario, we’re going to take that same simple flow, deploy it onto Kubernetes, and demonstrate some of these orchestration platform features.

We’re going to use Red Hat OpenShift since it is one of the most widely used and mature Kubernetes platforms. One of the great things about OpenShift is that it provides a consistent experience whether you are using it in a standalone installation on your own infrastructure, or through a managed service. So you could use a self-installed OpenShift environment, or any of the many managed OpenShift services from IBM, AWS, or Azure, and the instructions will be largely identical. OpenShift also brings many other benefits, some of which we’ll discuss as we go through.

The key differences compared to Scenario 1 will be:

  • Remote pull of the bar file: In Scenario 1 we were running Docker locally so we could pass the bar file to the container from the local file system. In this scenario we will show how the container can pull the bar file from a remote location over HTTP.
  • Deployment via an Operator: We will use an additional component known as an Operator to help us set up the container in OpenShift. This will perform the vast majority of the underlying work for us, significantly simplifying the deployment.
  • Configuration object: We will see our first example of a “configuration” object. In this scenario it will be the credentials for the HTTP request to retrieve the bar file.
  • Deployment using the Kubernetes command line: We will show how we can use a single standard Kubernetes command to do the deployment.

Accessing an OpenShift environment

To do this scenario you will need access to a Red Hat OpenShift environment, which is the market-leading productionised implementation of Kubernetes. This could be one you install yourself, although it would probably be easier to simply use a managed environment such as Red Hat OpenShift on IBM Cloud, AWS ROSA, or Azure Red Hat OpenShift. In a later post we will show how to install on a non-OpenShift Kubernetes environment, but we thought we would show OpenShift first as it makes the process significantly simpler.

Introducing the App Connect Enterprise Operator

To deploy containers to Kubernetes, there are a few more steps than there were for our simple Docker example. You need to know how to specify your deployment requirements in the underlying Kubernetes constructs – not easy if it’s your first time using a container platform. Luckily there is a standard mechanism in Kubernetes to simplify all that, known as an Operator. This is a piece of software, based on the open source Operator Framework, that is provided along with the App Connect Enterprise certified container. The Operator for App Connect understands how to take your key deployment requirements and translate them into the necessary Kubernetes constructs, providing a much simpler way to interact with Kubernetes.

The list of things that this operator takes care of is constantly increasing in line with the operator maturity model, but here are some of the current highlights.

  • Translates your requirements into Kubernetes constructs such as Deployments, Pods, Routes, Services, NodePorts, Replica Sets.
  • Links your deployment with any environment specific “configurations” your container will need at runtime (more on these later). It also watches these configurations for changes and ensures they are rolled out to any containers that are reliant on them.
  • The operator watches for change events on the custom resources it owns, such as IntegrationServer, each defined by a Custom Resource Definition (CRD).
  • The operator reconciles the actual state of those resources with the desired state declared in their definitions.
  • Applications based on operators (in our case the Integration Server) retain flexibility, and can be managed using kubectl and other Kubernetes-native tools.

There are a number of other services that the Operator provides, but we’ll explore those later in the post.

Note that there is a separate set of articles and webinars in this series that focus specifically on the operator in depth. Look for the “App Connect Operator” section on the series content page (http://ibm.biz/iib-ace).

Operators need to be downloaded into the OpenShift catalogue so that they can be installed into the Kubernetes environment. Once installed, an operator becomes just another of the native Kubernetes “resources”. This means that we can view it and communicate with it using the standard Kubernetes APIs, command line, and user interface, just like any other resource in Kubernetes.
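Once the operator has been installed (the two steps below), you can see this for yourself from the command line. As a minimal sketch, you can list the resource types the operator contributes and look at the IntegrationServer definition itself; the API group and CRD name shown here are assumptions based on the apiVersion used later in this scenario, so confirm them on your own cluster:

$ oc api-resources --api-group=appconnect.ibm.com

$ oc get crd integrationservers.appconnect.ibm.com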
Install the CatalogSource object by using the OpenShift CLI or web console, as documented at the following link:

https://www.ibm.com/docs/en/app-connect/containers_cd?topic=access-enabling-operator-catalog

It is worth noting that if you have already installed the IBM Cloud Pak for Integration on your OpenShift Cluster, then the IBM App Connect Enterprise Operator will already have been installed and you can skip this step.

Install the IBM App Connect Operator by following the instructions documented at the following link:

https://www.ibm.com/docs/en/app-connect/containers_cd?topic=operator-from-openshift-web-console
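Once the catalogue source and the operator are in place, it is worth confirming that both were created successfully. A quick check, assuming the defaults used in the linked documentation (the IBM operator catalogue lives in the openshift-marketplace namespace, and installing the operator for all namespaces places it in openshift-operators; adjust the namespaces if you chose differently):

$ oc get catalogsource -n openshift-marketplace

$ oc get csv -n openshift-operators | grep -i appconnect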

Enabling the container to retrieve the BAR file

In Scenario 1, we passed the BAR file to the container by mounting it from the local file system. Our containers don’t have access to a local file system in the same way in a Kubernetes environment, so we will need another technique.

In this demonstration we have chosen to make our BAR files available over HTTP, hosting them on a URL. We could host them on any HTTP server, but for simplicity, in the first part of this tutorial our BAR file is hosted on public GitHub (https://github.com/amarIBM/hello-world). This technique of performing deployments based on a repository is heading in the right direction for setting up continuous integration and continuous delivery (CI/CD) pipelines – something for us to explore more in future posts.
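You can confirm that the BAR file is reachable over HTTP before involving the container at all. A simple check with curl, using the same URL we will give to the integration server later in this scenario (the -L option follows GitHub’s redirect to the raw file, and -I requests just the response headers):

$ curl -sIL https://github.com/amarIBM/hello-world/raw/master/HttpEchoApp.bar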

You will need to provide basic (or alternative) authentication credentials for connecting to the URL endpoint where the BAR files are stored. Properties and credentials often change as we move from one environment to another, so we need a way to pass these in at deployment time. We do this by creating what is known as a “configuration object” in the Kubernetes environment, and then referencing this configuration object when we deploy our container. Let’s explore in a little more detail what configuration objects are, as they will be really important when we come to more complex integrations.

Introducing “configuration objects”

We need a mechanism to pass to the container any environment specific information that it will need at runtime. All your existing integrations involve connecting to things (databases, queues, FTP servers, TCP/IP sockets etc.) and each requires authentication credentials, certificates and other properties which will have different values depending on which environment you are deployed to.

In your existing IBM Integration Bus environment you’ll be familiar with mechanisms such as odbc.ini files for database connection properties and using the mqsisetdbparms command to set up authentication credentials (e.g. user ID and passwords) for the various systems you connect to. To make these same credentials and properties available to our container in Kubernetes, we create “Configuration” objects. The full list of configuration types is listed at the bottom of this page in the documentation.

Our simple integration for this scenario doesn’t actually connect to anything at runtime. However, the container itself does need credentials to connect to the URL to get the BAR file on start up, so it is for this that we need to create our first configuration object.

Creating a configuration object

A configuration object is created just like any other object in Kubernetes, using a set of values in a YAML formatted text file. Inside that YAML file we will embed the authentication credentials themselves, encoded in Base64.

1)     Prepare the authentication credentials

 

The authentication credentials must be formatted in the following way:

 {"authType":"BASIC_AUTH","credentials":{"username":"myUsername","password":"myPassword"}}

Where myUsername and myPassword are the user ID and password required to connect to the URL where the BAR file is located. In our case this is public GitHub, which requires no username or password, so our credentials will look like this:

 {"authType":"BASIC_AUTH","credentials":{"username":"","password":""}}

2)     Base64 encode the credentials

 

The configuration file requires that the credentials are Base64-encoded. You can create the Base64 representation of this data using the command shown below:

 $ echo '{"authType":"BASIC_AUTH","credentials":{"username":"","password":""}}' | base64

The result will be:

eyJhdXRoVHlwZSI6IkJBU0lDX0FVVEgiLCJjcmVkZW50aWFscyI6eyJ1c2VybmFtZSI6IiIsInBhc3N3b3JkIjoiIn19Cgo=
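If you want to sanity-check the value before putting it into the YAML file, you can simply decode it again and confirm it matches the JSON credentials you started with (the --decode option is common to most base64 implementations; some older versions use -d or -D instead):

$ echo 'eyJhdXRoVHlwZSI6IkJBU0lDX0FVVEgiLCJjcmVkZW50aWFscyI6eyJ1c2VybmFtZSI6IiIsInBhc3N3b3JkIjoiIn19Cgo=' | base64 --decode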

 

3)     Create the definition file for the configuration object

The following YAML code shows an example of what your configuration object should look like:

apiVersion: appconnect.ibm.com/v1beta1
kind: Configuration
metadata:
  name: github-barauth
  namespace: ace-demo
spec:
  data: eyJhdXRoVHlwZSI6IkJBU0lDX0FVVEgiLCJjcmVkZW50aWFscyI6eyJ1c2VybmFtZSI6IiIsInBhc3N3b3JkIjoiIn19Cgo=
  description: authentication for github
  type: barauth

 

 

Create a text file named github-barauth.yaml with the above contents.

The important fields in the file are as follows:

  • The kind parameter states that we want to create a Configuration object.
  • The name parameter is the name we will use to refer to this configuration object later when creating the integration server.
  • The data parameter is our Base64 encoded credentials.
  • The type parameter, barauth, indicates that these are the credentials to be used for authentication when downloading a BAR file from a remote URL.

You can read more about the barauth configuration object here.

4)     Log in to the OpenShift cluster

To create the configuration object we must first be logged in to our OpenShift cluster:

$ oc login --token=xxxxxxxxx  --server=https://yyyyyy.ibm.com:6443

5)     Create the configuration object in OpenShift

We will now create the Configuration object within our Red Hat OpenShift environment using the YAML file that we created above:

$ oc apply -f github-barauth.yaml

The command “oc” is the Red Hat OpenShift equivalent of the Kubernetes command “kubectl” and is essentially identical.

You can check the status of your configuration object, or list all the Configuration objects that you have created, using the following command:

$ oc get Configuration

NAME               AGE
github-barauth     4m
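To see the full definition that was stored, including the description and type fields, you can also retrieve the object in YAML form. This is just the standard Kubernetes “get” command; the -n flag is only needed if ace-demo is not already your current project:

$ oc get configuration github-barauth -n ace-demo -o yaml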

Creating an Integration Server

We are finally ready to deploy an IBM App Connect certified container, with a link to our BAR file, from the command line. To do this we must first create a YAML definition file for the IntegrationServer object, which should look like the following:

 

apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: http-echo-service
  namespace: ace-demo
  labels: {}
spec:
  adminServerSecure: false
  barURL: >-
    https://github.com/amarIBM/hello-world/raw/master/HttpEchoApp.bar
  configurations:
    - github-barauth
  createDashboardUsers: true
  designerFlowsOperationMode: disabled
  enableMetrics: true
  license:
    accept: true
    license: L-KSBM-C37J2R
    use: AppConnectEnterpriseProduction
  pod:
    containers:
      runtime:
        resources:
          limits:
            cpu: 300m
            memory: 350Mi
          requests:
            cpu: 300m
            memory: 300Mi
  replicas: 1
  router:
    timeout: 120s
  service:
    endpointType: http
  version: '12.0'

 

Save this YAML file as http-echo-service.yaml.

The two most important parameters to note are:

  • barURL, which denotes the URL where our BAR file resides.
  • configurations, which points to the configuration object we created in the previous section.


It’s worth noting that although we have only one integration flow in our container, you can have many. Indeed you could have multiple App Connect “applications” in your BAR file, and you can even specify multiple BAR files in the above barURL parameter by using a comma-separated list. For example:

barURL: >-
  https://github.com/amarIBM/hello-world/raw/master/HttpEchoApp.bar,https://github.com/amarIBM/hello-world/raw/master/CustomerOrderAPI.bar


Some considerations apply if deploying multiple BAR files:

  • Ensure that all of the applications can coexist (with no names that clash).
  • Ensure that you provide all of the configurations that are needed for all of the BAR files.
  • All of the BAR files must be accessible by using the single set of credentials that are specified in the configuration object of type BarAuth.

Now deploy the IntegrationServer YAML to your OpenShift (OCP) cluster using the steps below.

  1. Log in to your OCP cluster.

$ oc login --token=xxxxxxxxx  --server=https://yyyyyy.ibm.com:6443

  2. Create the Integration Server using the following command:

$ oc apply -f  http-echo-service.yaml

You should receive the confirmation:           

integrationserver.appconnect.ibm.com/http-echo-service created

  3. Verify the status of the Integration Server pod.

In Kubernetes, containers are always deployed within a construct called a pod. Let's look for the one we just created:

$ oc get pods

NAME                                       READY   STATUS    RESTARTS   AGE
http-echo-service-is-64bc7f5887-g6dcd      1/1     Running   0          67s

You’ll notice that it states “1/1”, meaning that we requested only one replica of this container (replicas: 1 in the definition file), and that the requested replica has started. Later on in this scenario we’ll explore how Kubernetes can dynamically scale up and down, load balance evenly across replicas, automatically reinstate pods if they fail, and roll out new versions with no downtime.
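As a small preview, the replica count is just another field on the IntegrationServer custom resource, so the operator will add or remove pods for you if you change it. A minimal sketch (the count of 3 is purely illustrative; you could equally edit http-echo-service.yaml and re-apply it):

$ oc patch integrationserver http-echo-service -n ace-demo --type merge -p '{"spec":{"replicas":3}}'

$ oc get pods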

You can also verify the status of your application by looking at the pod log:
$ oc logs <pod name>

2021-11-17 10:16:06.172262: BIP2155I: About to 'Start' the deployed resource 'HTTPEcho' of type 'Application'.
An http endpoint was registered on port '7800', path '/Echo'.
2021-11-17 10:16:06.218826: BIP3132I: The HTTP Listener has started listening on port '7800' for 'http' connections.
2021-11-17 10:16:06.218973: BIP1996I: Listening on HTTP URL '/Echo'.

From the pod logs we can see that the deployed HTTPEcho service is listening on service endpoint “/Echo”.

  4. Get the external URL for your service using ‘routes’.

$ oc get routes

NAME                       HOST/PORT                                                               PATH   SERVICES
http-echo-service-http     http-echo-service-http-ace-demo.apps.cp4i-2021-demo.cp.fyre.ibm.com            http-echo-service-is
http-echo-service-https    http-echo-service-https-ace-demo.apps.cp4i-2021-demo.cp.fyre.ibm.com           http-echo-service-is

 

  5. Invoke the service with the curl command, using the first (HTTP) URL you found in the previous step.

$ curl -X POST http://http-echo-service-http-ace-demo.apps.cp4i-2021-demo.cp.fyre.ibm.com/Echo

You should receive a response similar to the following, letting you know that your request made it into the container and came back.

 

<Echo>
     <DateStamp>2021-11-17T06:03:59.717574Z</DateStamp>
</Echo>
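Routes are the OpenShift way of exposing the flow outside the cluster, but for a quick test you can also bypass them and tunnel straight to the integration server’s HTTP listener on port 7800 (the port reported in the pod log above). A minimal sketch using the service name shown in the routes output; run the port-forward in a second terminal and leave it running while you call the service:

$ oc port-forward svc/http-echo-service-is 7800:7800 -n ace-demo

$ curl -X POST http://localhost:7800/Echo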

 

So that’s it, you’ve done it! You’ve deployed a (very) simple flow from an IBM Integration Bus environment into an IBM App Connect container in a Red Hat OpenShift environment. Once the definition files were created, you can see that it took only a handful of commands to perform the deployment. You can imagine how easy it would be to keep those files in a repository too and incorporate the deployment into an automated pipeline.

Acknowledgement and thanks to Kim Clark for providing valuable input to this article.

#IntegrationBus(IIB) #AppConnectEnterprise(ACE) #Docker #redhatopenshift #App-Connect-Operator


Comments

Mon April 25, 2022 07:55 PM

Hi Abu, thank you for your feedback. This is doable, but some extra configuration is required. Using the ACE Dashboard can provide this functionality out of the box.

Mon December 20, 2021 08:15 AM

Great article. Was wondering if you update the bar file in the git repo will the ACE operator automatically redeploy a new ACE integration server pod?

Mon November 22, 2021 12:40 AM

Very good and detailed series on App connect. Thanks for sharing.. Looking forward for complex scenarios