
Modernizing Integration – Migration from IIB to App Connect running on IBM Cloud Pak for Integration (CP4I)

Tue July 28, 2020 06:28 AM

1. Installing and configuring CP4I

Refer to this blog for installation instructions:
Deploying IBM Cloud Pak for Integration 2019.4 on OCP 4.2

Compatibility matrix

Note: Prerequisite – it is assumed that a Red Hat OpenShift cluster is already installed in your VMware on-premises environment. This article focuses on installing CP4I and configuring ACE.

2. Migration approach flowchart at a high level

The following flowchart describes the high-level steps or activities you may need to perform to migrate your existing IIB assets onto ACE running in a CP4I container environment.

3. Run the ACE Transformation Advisor Tool to determine container readiness of your ACE applications

ACE Fix pack 7 (11.0.0.7) provides a built-in Transformation Advisor command, which analyses your existing IBM Integration Bus v10 integration nodes for any potential issues if you plan to move your architecture to adopt containers.

When you run the TADataCollector command with the run parameter, a static HTML report is produced. The report lists any issues that are found for each integration server under the integration node.

The Overall Complexity Score is assessed as either Simple, Moderate, or Complex, and the following guidelines apply:

The Transformation Advisor tool also provides a severity classification for each issue it uncovers:


Running the Transformation Advisor tool

Steps to Run Transformation Advisor:

  • Create a backup file of your IBM Integration Bus Version 10.0 integration node by running the mqsibackupbroker command:

    mqsibackupbroker node10 -d C:\temp -a node10.zip

  • On the ACE V11.0.0.7 command console, run the TADataCollector command with the following options:

    mkdir C:\TADemo
    set TADataCollectorDirectory=C:\TADemo
    TADataCollector ace run C:\temp\node10.zip

  • The command will create a report as a static HTML page. An example of the layout is shown in the screenshot below:

An example of the TA report – summary section

An example of the TA report – Detailed assessment section with recommendations.

You can follow the links in the summary table, or just scroll down the page to view more detailed information about the issues that have been found in each integration server:


The initial release of this tool looks for message flow node instances which may require consideration when moving to containers.

4. Refactor integration flows

Based on the Transformation Advisor report, you may need to refactor your integration flows to make them container ready.

When do I need to Refactor?

  • Behavioral changes between source and target versions – Refer to this KC link
  • Deprecated/discontinued nodes – Refer to this KC link
  • Integration Node vs Standalone Integration Server (SIS) deployment topology
  • Independent projects – need to be converted to Application projects in ACE
  • Fine grained deployment – restructure BAR files
  • Replace local qmgr binding with remote client connection for stateless deployment
  • Hybrid Integration Scenario – Interfacing with SaaS apps
  • Replace configurable services with Policy Projects

Refer to the IBM Redbook on Agile integration for detailed discussion and guidance: Accelerating Modernization with Agile Integration

Here are a few grouping criteria to consider while refactoring.
These are explained in detail in the above Redbook, in section 7.2, Splitting up the ESB: Grouping integrations in a containerized environment:

1. Splitting by Business domain

2. Grouping within Domain

3. Shared Lifecycle

4. Scalability requirements

5. Local MQ Server dependency

6. Synchronous vs Asynchronous dependencies

7. Cross dependencies

5. ACE Container options for Deployment

The CP4I platform provides three types of images for deploying your App Connect/IIB BAR files, as shown in the figure below. You can refer to the Redbook mentioned in the section above, which explains and helps you decide which image type to use for deploying the integration flows.

6. Creating an App Connect integration server in CP4I to run your BAR file resources

Using Platform Navigator:

The steps described in the Knowledge Center page (link attached below) are applicable for deploying Designer flows (exported as a BAR file) as well as Toolkit authored message flows.
Cloud Pak for Integration only: Creating an integration server to run your BAR file resources

Using Command Line or CI-CD pipeline:

You can also build a CI-CD pipeline to create an integration server. Before the integration server starts, the container is checked for the folder /home/aceuser/initial-config. This allows you to inject configuration into the image as files. You can refer to the link below for details on injecting configuration as files:

https://github.com/ot4i/ace-docker/blob/master/README.md
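As a minimal sketch (only the bars sub-folder is confirmed by this article; the other folder name is an assumption to be checked against the README above), the configuration is laid out as files before being injected into the image or a secret:

# Illustrative layout only – verify the exact sub-folder names in the ot4i/ace-docker README
mkdir -p initial-config/bars          # BAR files picked up at start-up
mkdir -p initial-config/serverconf    # assumed location for server.conf.yaml overrides
cp MyApp.bar initial-config/bars/
cp server.conf.yaml initial-config/serverconf/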

To build the CI-CD pipeline, refer to section 7.5 in the guide below:

Accelerating Modernization with Agile Integration

An OpenShift ‘service’ is provisioned by default during the Helm deployment, which can be used to access the APIs/services deployed on the integration server from within the cluster.
Run the command below to get the service name:

oc get svc -n <namespace>
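The output lists the services in the namespace. The original article showed a screenshot here, so the values below are illustrative placeholders only:

NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
test-rel-ibm-ace-server-icp4i-prod   ClusterIP   <cluster-ip>   <none>        7800/TCP   5d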


By default, the name of the service is in the format <helm release name>-<chart name>. The fully qualified service name takes the following format by default:
<helm release name>-<chart name>.<namespace>.svc

As an example, take the service name test-rel-ibm-ace-server-icp4i-prod shown in the output above. The fully qualified name for this service would be:
test-rel-ibm-ace-server-icp4i-prod.ace.svc

This service exposes the HTTP interface on port 7800. For this example, a ‘ping’ API has been deployed on this integration server, so the applications deployed on this cluster can access the ‘ping’ API using the following URL:

http://test-rel-ibm-ace-server-icp4i-prod.ace.svc:7800/ping

The screenshot below shows accessing the ping API from an MQ container deployed on the cluster.
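For reference, a sketch of the equivalent request from a terminal inside any pod in the cluster (assuming curl is available in that container):

curl http://test-rel-ibm-ace-server-icp4i-prod.ace.svc:7800/ping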


If the API needs to be accessed from outside the cluster, then an OCP route for the service should be exposed. Use the command below to expose port 7800 of the service ‘test-rel-ibm-ace-server-icp4i-prod’:

oc expose svc test-rel-ibm-ace-server-icp4i-prod --port=7800


Now you can access the ‘ping’ API from outside the cluster.
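Get the hostname that OpenShift assigned to the route and call the API through it (the route hostname below is a placeholder):

oc get route test-rel-ibm-ace-server-icp4i-prod -n ace
curl http://<route-hostname>/ping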

Deploying More than one BAR file on an integration server:

As discussed in section 4, after you segregate your IIB applications/services into groups for deployment, you may package one group of applications/services into one BAR file and deploy it. However, in scenarios where the applications/services of a group are developed by separate developers or teams, they may produce separate BAR files, so you would need to deploy all of the BAR files that make up one complete application into the integration server. Currently, more than one BAR file cannot be deployed into an integration server from the ACE Dashboard UI; however, you can deploy more than one BAR file using a CI-CD pipeline or the command line.
Before the integration server starts, the container checks for BAR files in the folder /home/aceuser/initial-config/bars. You can bake the BAR files into the base image from CP4I and create a Docker image for your integration server.
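A minimal example Dockerfile is sketched below; the FROM reference is a placeholder for the ACE certified container image in your own registry, and the BAR file names are placeholders for your own artifacts:

# Placeholder base image – use the ACE certified container image available in your OCP registry
FROM image-registry.openshift-image-registry.svc:5000/ace/ibm-ace-server-prod:latest
# Bake the BAR files into the folder checked at start-up
COPY MyApp1.bar /home/aceuser/initial-config/bars/
COPY MyApp2.bar /home/aceuser/initial-config/bars/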

Now create the image from this Dockerfile, tag it, and push it to the OCP registry:

oc login ${OCP_API_SERVER} -u ${USER} -p ${PASSWORD} --insecure-skip-tls-verify
docker login ${OpenshiftRegistryURL} -u $(oc whoami) -p $(oc whoami -t)
docker build -t ${imagename}:${tag} .
docker tag ${imagename}:${tag} ${targetrepo}/${imagename}:${tag}
docker push ${targetrepo}/${imagename}:${tag}

This image has your BAR files baked into it. You can point to this image when performing helm deployment.

7. Use secrets to configure various properties, policies, db connections, certificates

IBM App Connect overridable configuration must be deployed in a secret in your Kubernetes namespace prior to the integration server deployment. The documentation at the URL below shows how to do this using a script supplied by IBM (see the section “Installing a sample image” and the generateSecrets.sh script):
https://github.com/ot4i/ace-helm/blob/master/ibm-ace/README.md

The generateSecrets.sh script takes all the files in that directory and turns them into a Kubernetes secret. When the container starts, it loads the secret, extracts its parts, and places them in the appropriate paths of the integration server work directory, such as placing server.conf.yaml in the overrides directory of the integration server’s workdir.

You can create the secret from the command line as:

$ oc login --token=HW433nWfrlk3Lj54uLjheXnC_SJrXXEyfbkZmfJzomw --server=https://api.prod3.os.fyre.ibm.com:6443 -n ace
$ generateSecrets.sh my-test-secret
$ oc get secret
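You can verify that the secret was created and inspect the keys it contains (the secret name matches the example above):

$ oc describe secret my-test-secret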

8. Auto Scaling policies in Openshift

An autoscaling policy can be defined in the OCP admin UI using the following option:
Workloads → Horizontal Pod Autoscaler

  • Specify the name of “Deployment” to which you want to apply this auto scaling policy.
  • Specify the max Replicas you want to scale up to.
  • Specify the target CPU utilisation level at which a new pod replica will be spawned.


You can also create a Horizontal Pod Autoscaler from the command line. Follow the steps below:

    • Get the name of deployment using the command
      oc get deployment -n <namespace>

    • Create a Horizontal Pod Autoscaler (HPA) for your deployment. For example, let us create an HPA for the deployment ‘test-rel-ibm-ace-server-icp4i-prod’ with a minimum of 1 and a maximum of 3 replicas. This deployment should scale if CPU usage goes beyond 75%.
      oc autoscale deployment/test-rel-ibm-ace-server-icp4i-prod --min 1 --max 3 --cpu-percent=75 -n <namespace>

  • You can check the HPAs using the command:
    oc get hpa -n <namespace>

9. High Availability considerations on K8s platform

In a containerized world there are standardized ways to declaratively define an HA topology (Helm Charts). Furthermore, the components that enable the high availability such as load balancers and service registries do not need to be installed or configured since they are a fundamental part of the platform. Kubernetes has high availability policies built in, and these can be further customized using standard configuration techniques.
At the application level it is the pods that provide high availability. Kubernetes allows you to run multiple pods (redundancy) and in the event of one of the pods or containers failing, Kubernetes will spin up a replacement pod. This way you can ensure the availability of your services at all times.

When configuring an IBM App Connect helm chart, you can define values for its configurable parameters. Along with various integration server related parameters, you will find an option called ‘Replica Count’. This count represents the number of pods that your application will have running at all times. The default value for the number of replicas for an ACE helm chart is 3. You can set this value to 1 or more.

You can also edit the number of replicas of your existing deployment using the OCP admin console.
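You can also change the replica count from the command line; a sketch using the example deployment name from section 6:

oc scale deployment/test-rel-ibm-ace-server-icp4i-prod --replicas=2 -n <namespace>

Note that if a Horizontal Pod Autoscaler is attached to the deployment, it may adjust the replica count again based on load.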

10. Upgrading ACE certified container image in CP4I

Download the IBM certified container image from Fix Central. After the download is complete, follow the steps below:

1. Log in to your cluster from the CLI
Use the syntax as shown in the following command:

cloudctl login -a https://icp-console.apps.prod3.os.fyre.ibm.com -n ace


2. Log in to the OpenShift container registry.
Ensure that the oc tool and Docker are installed on the client machine and that you are able to do ‘docker login’ to the OCP registry. Follow steps 1 and 2 in the article below to install and configure the oc tool and Docker:

Deploying IBM Cloud Pak for Integration 2019.4 on OCP 4.2

docker login <OCP registry url> -u $(oc whoami) -p $(oc whoami -t)

For example:

docker login default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com -u $(oc whoami) -p $(oc whoami -t)

3. Untar the ACEcc fixpack
tar xvf <ACEcc tar file name>

For example:
tar xvf ACECC-3.0.0_IT31471_IT31472.tar


4. Install the IBM App Connect Enterprise certified container compressed file

A script bundled with the Fix Central package installs and loads the images into your Docker registry:
$ ./loadImages.sh <registry> <namespace>

For example:
$ ./loadImages.sh default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com ace

You can log in to the OpenShift console and navigate to ImageStreams in the namespace to which you pushed the new images, and verify that they are present.
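Alternatively, verify the image streams from the command line:

oc get imagestreams -n ace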


You can point to the new image tag while deploying the new helm release or upgrading the existing helm release.

11. Clean-up the Integration Server

If an Integration Server is not needed anymore, you can clean it up by following these steps.

  1. Delete the Helm release:
    You can delete the helm release from the Cloud Pak Foundation dashboard. Go to
    Administer → Helm Releases

    Click on the three dots against the helm release for the integration server and click Delete.

    Alternatively, you can delete the helm release from the command line. Ensure that you have the ‘cloudctl’ and ‘helm’ clients installed on your machine.
    Log in to the Cloud Pak foundation with cloudctl as below:

    cloudctl login -a <icp url> -u <username> -p <password> -n <namespace>

    For example:

    cloudctl login -a icp-console.apps.prod3.os.fyre.ibm.com -u admin -p admin -n ace

    helm delete <release name> --purge --no-hooks --tls
  2. Delete associated objects of your deployment.
    Deleting the helm release deletes all the objects that were created as part of the helm release for the integration server. Other objects that you created yourself, such as secrets, HPAs, and routes, should be deleted manually after deleting the helm release. For example, if you created a Route and a Horizontal Pod Autoscaler, you can delete them from the command line (see the example commands at the end of this section) or from the OCP console.
    To delete from the OCP console, go to Networking → Routes and select the namespace.

    To delete the Horizontal Pod Autoscaler, go to Workloads → Horizontal Pod Autoscalers; click on the three dots against the respective HPA and click on ‘Delete Horizontal Pod Autoscaler’.
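    To delete them from the command line instead, use the object names from the earlier examples (the HPA created by oc autoscale takes the name of its deployment):

    oc delete route test-rel-ibm-ace-server-icp4i-prod -n <namespace>
    oc delete hpa test-rel-ibm-ace-server-icp4i-prod -n <namespace>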

12. An example of refactoring an integration flow

Scenario: How do I refactor an IIB/App Connect integration flow that has an MQInput node so that it consumes messages from a remote queue manager running in another container within the same CP4I instance?

If you have integration flows that interact with an MQ queue manager, and they were configured to use a local queue manager in a previous version (IIB), you may want to refactor them to use a remote client connection to take advantage of stateless container deployment.

To access the Queue manager deployed within the CP4I cluster:

Configure the MQ node’s connection properties as shown below:

  • Connection : MQ Client Connection Properties
  • Destination Queue manager : Name of the queue manager you want to connect to within your CP4I instance.
  • Channel Name : the server connection channel. You need a SVRCONN channel between the ACE and MQ instances to communicate over; define a SVRCONN channel on the MQ instance.

  • Queue manager host name and port number: This can be obtained from the Openshift console

    OCP Admin console → Networking → Services

When you are accessing an MQ instance from within the CP4I platform, you can connect to MQ using its service name.
The format of the service name is:
<service name>.<namespace>.svc

In the above example, it translates to: mq-rel-ibm-mq.mq.svc
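You can confirm the MQ service name and listener port from the command line (the namespace here matches the example):

oc get svc -n mq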

You may also opt to define the connection attributes on the MQInput node via the MQEndpoint policy. The policies allow you to control the operational behavior dynamically at runtime.

An example of the MQEndpoint policy with the same connection attributes:

<?xml version="1.0" encoding="UTF-8"?>
<policies>
  <policy policyType="MQEndpoint" policyName="myMQEndpointPolicy" policyTemplate="MQEndpoint">
    <connection>CLIENT</connection>
    <destinationQueueManagerName>MyQMGR</destinationQueueManagerName>
    <queueManagerHostname>mq-rel-ibm-mq.mq.svc</queueManagerHostname>
    <listenerPortNumber>1414</listenerPortNumber>
    <channelName>mysvrconn</channelName>
    <securityIdentity>mymqmsec</securityIdentity>
    <useSSL>false</useSSL>
    <SSLPeerName></SSLPeerName>
    <SSLCipherSpec></SSLCipherSpec>
  </policy>
</policies>

3 comments on "Modernizing Integration – Migration from IIB to App Connect running on IBM Cloud Pak for Integration (CP4I)"

  1. Suresh Patnam1 March 04, 2020

    Thanks Anand & Amar for the MQ Connectivity article. This really helps.

    Customer is also looking to debug the flows running on Cloud Pak.

    Can you shed some light on how we can debug flows running on OpenShift?

    Does below process work?

    1. Get the route of the ace server
    2. Get the webuser login details from container terminal… under /home/aceuser/initialconfig/webusers
    3. Connect from the ACE toolkit
    4. Assign a port in server.yaml file
    5. Update the secret and recreate pod
    6. Open the port
    7. Launch Debugger??

  2. Amar Shah February 28, 2020

    Hi Suresh
    Please refer to the recent blog on Connecting to a Queue Manager on Cloud Pak for Integration

    Thanks.

  3. Suresh Patnam1 February 26, 2020

    Anand,

    This is really helpful. Could you also please provide instructions to connect to the queue manager from MQ Explorer or RFHUtil from outside the cluster?

    Until CP4I 3.2.2, we had a NodePort associated with the queue manager listener, so we used the proxy hostname and NodePort to connect. But with CP4I 4.1, we have routes for the queue manager and web console.

    I tried using the hostname from the QM route and 443 as the port. I get an MQRC 2009 error.

    Thanks
    Suresh



#Integration
#migration
#IntegrationBus(IIB)
#AppConnect
#IBMCloudPakforIntegration(ICP4I)