Building MQ Cluster Using Cloud Pak for Integration

By Kapila Arora posted Thu May 14, 2020 03:40 AM

  

1       Authors

This article was authored by Kapila Arora and Lalita Aryasomayajul.

2        Purpose

The purpose of this document is to provide the configuration details for building an MQ cluster topology for high service availability and workload balancing.

3       Cloud Pak for Integration (CP4I)

CP4I 2019.3.2 has been provisioned on the OpenShift Cluster (OCP 3.11) on IBM Public Cloud.  MQ Advanced version 9.1.3 has been used for this demonstration.

4         Prerequisite Steps

  1. Install Cloud Pak for Integration (CP4I) 2019.3.2 on top of OCP 3.11 on IBM Public Cloud.
  2. To run the ‘oc’ commands locally, download and install the OpenShift client on the local machine: “openshift-origin-client-tools-v3.11.0-0cbc58b-windows”.
  3. To run the ‘rsync’ command from a local Windows machine, download and install the rsync utility: “cwrsync_5.5.0_x86_free”.
  4. Install an MQ server with the sample MQ utilities on the local Windows machine. The team used IBM MQ 9.1.4 so that the sample utilities amqsput, amqsget, amqsphac and amqsghac could be used to test workload balancing across the MQ cluster queue managers on CP4I. A quick way to verify these client tools is sketched below.
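The following commands, run from a local Windows command prompt, can be used to confirm the client tools are available. The cluster URL and token are placeholders for your own environment:

REM Verify the OpenShift client and log in to the cluster (URL and token are placeholders)
oc version
oc login https://<your-ocp-cluster-url>:8443 --token=<your-token>

REM Verify the rsync utility is on the PATH
rsync --version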

5      MQ Cluster Design

Three MQ service instances are created on CP4I for the demonstration of the MQ cluster. Three single-resilient queue managers, QMFR1, QMFR2 and QMPR, are configured as per the MQ cluster design depicted below. QMFR1 and QMFR2 are full repository queue managers and QMPR is a partial repository queue manager. For a demonstration, one full repository queue manager is sufficient, with the others acting as partial repository queue managers for applications.

MQ Cluster Design

5.1    Queue Manager Provisioning on Cloud Pak for Integration 

Please refer to the document below for single-resilient queue manager provisioning in Cloud Pak for Integration. This uses a single storage volume for the data, logs and queue manager data. The queue manager configuration is simple and intended for demonstration only; it does not address security aspects. You can apply configurations based on your own requirements.
MQ_Capability_Deployment_Steps.docx


5.2     Provisioned Queue Managers on Cloud Pak for Integration
Below are the MQ instances created; they are in ‘Running’ state on CP4I.

Queue Manager Provisioned

Once the three MQ queue manager instances are created, run 'oc get pods' to confirm the corresponding pods are running.

Queue Manager Provisioned


Run the ‘oc get services’ command to view details such as service names and port numbers. By default, CP4I enables NodePort for the MQ services, so these services are accessed through externally mapped ports.

Queue Manager Provisioned Services Details
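A minimal sketch of the two commands, assuming the queue managers were deployed into a project named 'mq' (substitute your own namespace):

oc get pods -n mq
oc get services -n mq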

Note: Security has been disabled for all three queue managers as follows:

  1. Clear the ‘Connection authentication’ value under the ‘Extended’ tab of the queue manager properties.
  2. Set CHLAUTH to ‘Disabled’ under the ‘Communication’ tab of the queue manager properties.
  3. Save all the changes in the queue manager console.
  4. Refresh security on the queue managers.
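The same changes can also be made with MQSC from inside each MQ pod; a rough equivalent, run via runmqsc, would be:

* Clear connection authentication, disable channel authentication, then refresh security
ALTER QMGR CONNAUTH(' ')
ALTER QMGR CHLAUTH(DISABLED)
REFRESH SECURITY TYPE(CONNAUTH)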

6         Cluster configuration

These three queue managers have identical configurations, each with the same cluster queue defined, and are all members of the cluster named “MYCLUSTER”. In this setup QMFR1 and QMFR2 are full repository queue managers while QMPR is a partial repository queue manager.


Below are the configurations for QMFR1. Access the QMFR1 pod and run "runmqsc <QMNAME>".

Access QMFR1 Pod to apply Configurations
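A minimal sketch of getting an MQSC prompt inside the pod (the pod name is a placeholder; use 'oc get pods' to find yours):

oc exec -it <qmfr1-pod-name> -- runmqsc QMFR1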


Command

ALTER QMGR REPOS(MYCLUSTER)

Queue Manager Property - Repository

 

 
Cluster-sender and cluster-receiver channels, which each queue manager uses to communicate with the other queue managers, are created using MQSC commands inside the MQ pods.

Commands:

DEFINE CHANNEL(MYCLUSTER.QMFR2) CHLTYPE(CLUSSDR) CONNAME('sr-qm4-ibm-mq(1414)') CLUSTER(MYCLUSTER)

DEFINE CHANNEL(MYCLUSTER.QMFR1) CHLTYPE(CLUSRCVR) CONNAME('sr-qm3-ibm-mq(1414)') CLUSTER(MYCLUSTER)

DEFINE CHANNEL(MYCLUSTER.QMPR) CHLTYPE(CLUSSDR) CONNAME('sr-qm5-ibm-mq(1414)') CLUSTER(MYCLUSTER)

 

QMFR1 Cluster Channels

 

 

Similarly, the repository attribute and the cluster sender/receiver channels are created on QMFR2 from its MQ pod.

Commands:
ALTER QMGR REPOS(MYCLUSTER)

DEFINE CHANNEL(MYCLUSTER.QMFR1) CHLTYPE(CLUSSDR) CONNAME('sr-qm3-ibm-mq(1414)') CLUSTER(MYCLUSTER)

DEFINE CHANNEL(MYCLUSTER.QMFR2) CHLTYPE(CLUSRCVR) CONNAME('sr-qm4-ibm-mq(1414)') CLUSTER(MYCLUSTER)

 

Similarly, cluster sender and receiver channels have been created on QMPR from its MQ pod. As QMPR is a partial repository, 'ALTER QMGR REPOS' is not run on it.
Commands:

DEFINE CHANNEL(MYCLUSTER.QMFR1) CHLTYPE(CLUSSDR) CONNAME('sr-qm3-ibm-mq(1414)') CLUSTER(MYCLUSTER)

DEFINE CHANNEL(MYCLUSTER.QMPR) CHLTYPE(CLUSRCVR) CONNAME('sr-qm5-ibm-mq(1414)') CLUSTER(MYCLUSTER)
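Once the channels are defined on all three queue managers, cluster membership can be verified from runmqsc on any of the pods; a quick check would be:

* List the queue managers known to the cluster and the channel status
DISPLAY CLUSQMGR(*)
DISPLAY CHSTATUS(*)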

 

One local queue named CLUSQ has been created on each queue manager (QMFR1, QMFR2 and QMPR) as a cluster queue, from their corresponding MQ pods.

Command: DEFINE QLOCAL(CLUSQ) CLUSTER(MYCLUSTER) CLWLUSEQ(ANY)

On this queue, set the ‘Default bind type’ property to ‘Not fixed’. This property allows messages to be workload balanced across the cluster queue managers. You can set it from the MQ Console or with MQSC commands from the MQ pods.

Queues Property
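If you prefer MQSC over the console, the equivalent change (run via runmqsc inside each pod) is:

* Allow the cluster workload algorithm to choose a destination per message
ALTER QLOCAL(CLUSQ) DEFBIND(NOTFIXED)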

7        Client and Server Connection Channel Setup

For MQ client applications to connect to the MQ servers, we will create a new client-connection and server-connection channel pair, with the same name, under each queue manager. These channel definitions are collected into a CCDT, which the MQ client applications will use to connect to the server queue managers.

7.1      Creation of Server Connection Channel
Run the below command from MQ Pod

Command:

DEFINE CHANNEL(QMFR1.APP.CHL) CHLTYPE(SVRCONN) TRPTYPE(TCP) MCAUSER('mqm')

Server Connection Channel on QMFR1

 

The MCA user (MCAUSER) property for this channel must be set to ‘mqm’.

Server Connection Channel Property -MCA


7.2      Creation of Client Connection Channel
Run the command below, from the corresponding MQ pod, to create the client-connection channel matching the server-connection channel above.

Command:
DEFINE CHANNEL(QMFR1.APP.CHL) CHLTYPE(CLNTCONN) TRPTYPE(TCP) CONNAME('sr-qm3-ibm-mq(1414)') QMNAME(QMFR1) DESCR('Client-connection to QMFR1 Server')

Client Connection Channel on QMFR1

Similarly, client and server connection channels are created for the other queue managers, QMFR2 and QMPR, with the appropriate names; the commands are provided below.

QMFR2 Commands:

DEFINE CHANNEL(QMFR2.APP.CHL) CHLTYPE(SVRCONN) TRPTYPE(TCP) MCAUSER('mqm')

DEFINE CHANNEL(QMFR2.APP.CHL) CHLTYPE(CLNTCONN) TRPTYPE(TCP) CONNAME('sr-qm4-ibm-mq(1414)') QMNAME(QMFR2) DESCR('Client-connection to QMFR2 Server')

 

 QMPR Commands:

DEFINE CHANNEL(QMPR.APP.CHL) CHLTYPE(SVRCONN) TRPTYPE(TCP) MCAUSER('mqm')

DEFINE CHANNEL(QMPR.APP.CHL) CHLTYPE(CLNTCONN) TRPTYPE(TCP) CONNAME('sr-qm5-ibm-mq(1414)') QMNAME(QMPR) DESCR('Client-connection to QMPR Server')

 

8        CCDT File Configuration


The host name is the ICP proxy URL and the port is the external port mapped to 1414 for each queue manager. Refer to the output of the "oc get svc" command to obtain these values. The services are exposed as NodePort by CP4I itself so that they can be accessed from outside the cluster.

Create a CCDT file (JSON format) containing the client-connection channel names created in section 7.2 along with the host and port details of each queue manager. There are two entries per queue manager: one with a queue manager group name and another with the queue manager name.  Cluster_CCDT.json

This CCDT is placed on the local machine and used by MQ clients/applications in the local environment to connect to the queue managers on Cloud Pak for Integration.
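A minimal sketch of one channel entry in the JSON CCDT is shown below. The host name and port are placeholders that must be replaced with your ICP proxy URL and the external NodePort mapped for QMFR1, and the complete file would repeat this pattern for each queue manager (plus the queue manager group entries):

{
  "channel": [
    {
      "name": "QMFR1.APP.CHL",
      "type": "clientConnection",
      "clientConnection": {
        "connection": [
          {
            "host": "<icp-proxy-url>",
            "port": 31414
          }
        ],
        "queueManager": "QMFR1"
      }
    }
  ]
}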

9         MQ application set-up

To run the MQ client applications from the local Windows machine, we need to set a few environment variables.

  1. Open a command prompt on the local Windows system where the MQ client is installed.
  2. Navigate to the MQ installation path.
  3. Set MQCHLLIB={path where you placed the CCDT JSON file}
  4. Set MQCHLTAB={CCDT JSON file name}
  5. Set MQAPPLNAME=<application name>
Required environment variables displayed
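For example, assuming the CCDT is saved as C:\mq\ccdt\Cluster_CCDT.json (the path and application name are placeholders):

set MQCHLLIB=C:\mq\ccdt
set MQCHLTAB=Cluster_CCDT.json
set MQAPPLNAME=CLUSTER.TEST.APP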

 

10    Test Scenario Execution for MQ Cluster

 

Demonstrate the MQ cluster workload balancing

Use the ‘amqsphac.exe’ sample to put messages on the cluster queue through any one of the cluster queue managers on CP4I. In this test, the messages are put through CLUSQ on QMPR and should be workload balanced across the CLUSQ instances on QMFR1, QMFR2 and QMPR.

>amqsphac <Cluster queue> <Cluster Queue Manager>
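For example, to put messages through QMPR using the queue and queue manager names defined earlier:

amqsphac CLUSQ QMPR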

MQ Client Applications Putting Data on ClusQ on QMPR

Once 15 messages are sent, go to the console for the MQ instances of all three queue managers and check the queue depth of ‘CLUSQ’ on each of QMFR1, QMFR2 and QMPR. You will find the messages distributed almost evenly across all three queue managers.
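The depth can also be checked from MQSC inside each pod instead of the console, for example:

* Show the current depth of the cluster queue
DISPLAY QLOCAL(CLUSQ) CURDEPTH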

 
QMFR1 - checked Queue Depth

QMFR1 Queue Depth on CLUSQ



QMFR2  - Checked the Queue Depth

QMFR2 Queue Depth of CLUSQ



QMPR  - Checked the Queue Depth

QMPR Queue Depth for CLUSQ

 

In this way, we demonstrated cluster workload balancing across the cluster queue managers. A few other test scenarios were also run, changing the sample utility's message persistence property to check for data loss when one of the queue managers is restarted during workload processing. You can also use your own MQ client applications, or App Connect flows with MQ adapters, to observe the same behaviour.

 

11    Supporting Team

Thanks to Kiran Darbha, Matthew Whitehead, Arthur Barr and Callum Jackson for their support in realizing the MQ cluster on MQ containers using Cloud Pak for Integration.

 

 
