1 Authors
This article was authored by Kapila Arora and Lalita Aryasomayajul.
2 Purpose
The purpose of this document is to provide the configuration details for building an MQ cluster topology for high service availability and workload balancing.
3 Cloud Pak for Integration (CP4I)
CP4I 2019.3.2 has been provisioned on an OpenShift cluster (OCP 3.11) on IBM Public Cloud. MQ Advanced version 9.1.3 is used for this demonstration.
4 Prerequisite Steps
- Installation of Cloud Pak for Integration (CP4I) 2019.3.2 on top of OCP 3.11 on IBM Public Cloud.
- To run 'oc' commands from the local machine, download and install the OpenShift client locally - "openshift-origin-client-tools-v3.11.0-0cbc58b-windows" (a login example is shown after this list).
- To run the 'rsync' command from a local Windows machine, download and install the rsync utility on the local machine: "cwrsync_5.5.0_x86_free".
- The local Windows machine has an MQ server installation that provides the sample MQ utilities. The team used IBM MQ 9.1.4 and its sample utilities (amqsput, amqsget, amqsphac and amqsghac) to test workload balancing across the cluster queue managers on CP4I from the local machine.
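Before running any 'oc' commands, log in to the OpenShift cluster from the local machine; for example (the API URL and token below are placeholders for your environment):
Command:
oc login https://<ocp-cluster-api-url>:8443 --token=<your-token>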
5 MQ Cluster Design
Three MQ service instances are created on CP4I for the demonstration of the MQ cluster. Three single-resilient queue managers, QMFR1, QMFR2 and QMPR, are configured as per the MQ cluster design depicted below. QMFR1 and QMFR2 are full repository queue managers and QMPR is a partial repository queue manager. For a demonstration, you can also keep just one full repository queue manager and let the others act as partial repository queue managers for applications.
5.1 Queue Manager Provisioning on Cloud Pak for Integration
Please refer to the document below for single-resilient queue manager provisioning in Cloud Pak for Integration. It uses a single storage volume for the data, logs and queue manager data. These queue manager configurations are simple and intended for a demo; they do not address security aspects. You can apply configurations based on your own requirements.
MQ_Capability_Deployment_Steps.docx
5.2 Provisioned Queue Managers on Cloud Pak for Integration
Below are the MQ instances created; they are in 'Running' state on CP4I.
Once the three MQ queue manager instances are created, run 'oc get pods' to confirm that the pods are running.
Run the 'oc get services' command to view details such as service names and port numbers. By default, CP4I enables NodePort for the MQ services, so they are accessed through the externally mapped ports.
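For reference, the commands used here are listed below. The jsonpath query is an optional example showing one way to extract the NodePort mapped to port 1414 for a given service (sr-qm3-ibm-mq is the service name used for QMFR1 in this setup):
Commands:
oc get pods
oc get services
oc get svc sr-qm3-ibm-mq -o jsonpath="{.spec.ports[?(@.port==1414)].nodePort}"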
Note: Security has been disabled for all three queue managers as described below.
- Clear the 'Connection authentication' value under the 'Extended' tab of the queue manager properties.
- Set CHLAUTH to 'Disabled' under the 'Communication' tab of the queue manager properties.
- Save all the changes in the MQ Console.
- Then refresh security on the queue managers. The equivalent MQSC commands are shown below.
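If you prefer to disable security with MQSC commands from the MQ pod instead of the console, the equivalent commands would look roughly like the following (a sketch; clearing CONNAUTH and disabling CHLAUTH matches the console steps above):
Commands:
ALTER QMGR CONNAUTH(' ')
ALTER QMGR CHLAUTH(DISABLED)
REFRESH SECURITY TYPE(CONNAUTH)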
6 Cluster configuration
These three queue managers have identical configurations, with the same cluster queue created, and are all members of the cluster named "MYCLUSTER". In this setup QMFR1 and QMFR2 are full repository queue managers while QMPR is a partial repository queue manager.
Below are the configurations for QMFR1. Access the QMFR1 pod and run "runmqsc <QMNAME>".
Command:
ALTER QMGR REPOS(MYCLUSTER)
Cluster-sender and cluster-receiver channels, which each queue manager uses to communicate with the other queue managers, are created using MQSC commands inside the MQ pod.
Command:
DEFINE CHANNEL(MYCLUSTER.QMFR2) CHLTYPE(CLUSSDR) CONNAME('sr-qm4-ibm-mq(1414)') CLUSTER(MYCLUSTER)
DEFINE CHANNEL(MYCLUSTER.QMFR1) CHLTYPE(CLUSRCVR) CONNAME('sr-qm3-ibm-mq(1414)') CLUSTER(MYCLUSTER)
DEFINE CHANNEL(MYCLUSTER.QMPR) CHLTYPE(CLUSSDR) CONNAME('sr-qm5-ibm-mq(1414)') CLUSTER(MYCLUSTER)
Similarly, the repository setting and cluster sender/receiver channels are created on QMFR2 from its corresponding MQ pod.
Commands:
ALTER QMGR REPOS(MYCLUSTER)
DEFINE CHANNEL(MYCLUSTER.QMFR1) CHLTYPE(CLUSSDR) CONNAME('sr-qm3-ibm-mq(1414)') CLUSTER(MYCLUSTER)
DEFINE CHANNEL(MYCLUSTER.QMFR2) CHLTYPE(CLUSRCVR) CONNAME('sr-qm4-ibm-mq(1414)') CLUSTER(MYCLUSTER)
Similarly, cluster sender and receiver channels are created on QMPR from its corresponding MQ pod.
Commands:
DEFINE CHANNEL(MYCLUSTER.QMFR1) CHLTYPE(CLUSSDR) CONNAME('sr-qm3-ibm-mq(1414)') CLUSTER(MYCLUSTER)
DEFINE CHANNEL(MYCLUSTER.QMPR) CHLTYPE(CLUSRCVR) CONNAME('sr-qm5-ibm-mq(1414)') CLUSTER(MYCLUSTER)
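Once the repositories and channels are defined on all three queue managers, you can optionally verify cluster membership from any of the MQ pods (a suggested check, not part of the original steps):
Command:
DISPLAY CLUSQMGR(*)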
One local queue, CLUSQ, has been created on each queue manager (QMFR1, QMFR2 and QMPR) as a cluster queue from the corresponding MQ pods.
Command: DEFINE QLOCAL(CLUSQ) CLUSTER(MYCLUSTER) CLWLUSEQ(ANY)
On this queue, set the 'Default bind type' property to 'Not fixed'. This property enables workload balancing across the cluster queue managers. You can set it from the MQ Console or with MQSC commands from the MQ pods, for example as shown below.
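The equivalent MQSC command to change the bind type, if you prefer to run it from the MQ pod rather than the console, would be the following (the attribute can also be included directly on the DEFINE QLOCAL command):
Command:
ALTER QLOCAL(CLUSQ) DEFBIND(NOTFIXED)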
7 Client and Server Connection Channel Setup
For MQ client applications to connect to the MQ server, we will create a new client-connection and server-connection channel pair with the same name under each queue manager. These channels are referenced in a CCDT, which the MQ client applications use to connect to the server queue managers.
7.1 Creation of Server Connection Channel
Run the command below from the MQ pod.
Command:
DEFINE CHANNEL(QMFR1.APP.CHL) CHLTYPE(SVRCONN) TRPTYPE(TCP) MCAUSER('mqm')
The MCAUSER property for this channel must be set to 'mqm'.
7.2 Creation of Client Connection Channel
Run the command below from the corresponding MQ pod to create a client-connection channel that corresponds to the server-connection channel above.
Command:
DEFINE CHANNEL(QMFR1.APP.CHL) CHLTYPE(CLNTCONN) TRPTYPE(TCP) CONNAME('sr-qm3-ibm-mq(1414)') QMNAME(QMFR1) DESCR('Client-connection to QMFR1 Server')
Similarly, client/server channels are created for the other queue managers, QMFR2 and QMPR, with appropriate names; the commands are provided below.
QMFR2 Commands:
DEFINE CHANNEL(QMFR2.APP.CHL) CHLTYPE(SVRCONN) TRPTYPE(TCP) MCAUSER('mqm')
DEFINE CHANNEL(QMFR2.APP.CHL) CHLTYPE(CLNTCONN) TRPTYPE(TCP) CONNAME('sr-qm4-ibm-mq(1414)') QMNAME(QMFR2) DESCR('Client-connection to QMFR2 Server')
QMPR Commands:
DEFINE CHANNEL(QMPR.APP.CHL) CHLTYPE(SVRCONN) TRPTYPE(TCP) MCAUSER('mqm')
DEFINE CHANNEL(QMPR.APP.CHL) CHLTYPE(CLNTCONN) TRPTYPE(TCP) CONNAME('sr-qm5-ibm-mq(1414)') QMNAME(QMPR) DESCR('Client-connection to QMPR Server')
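To confirm that both the client-connection and server-connection channels exist on each queue manager, a display such as the following can be run from the corresponding MQ pod (a suggested check, not part of the original steps):
Command:
DISPLAY CHANNEL('*.APP.CHL') CHLTYPE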
8 CCDT File Configuration
The host name is the ICP proxy URL and the port is the externally mapped NodePort for each queue manager's port 1414. Refer to the output of the "oc get svc" command to get these values. These services are enabled for NodePort by CP4I itself so that they can be accessed from outside the cluster.
Create a CCDT file (JSON format) containing the client-connection channel names created in section 7.2 and the host and port details of each queue manager. There are two entries per queue manager, one with the group name and another with the queue manager name. Cluster_CCDT.json
This CCDT is placed on the local machine and is used by MQ clients/applications in the local environment to connect to the queue managers on Cloud Pak for Integration.
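A minimal sketch of what one entry in such a JSON CCDT might look like is shown below; the host and port values are placeholders for the ICP proxy URL and the mapped NodePort, and in this setup the file would contain two such entries per queue manager (one keyed by the queue manager name and one by the group name):
{
  "channel": [
    {
      "name": "QMFR1.APP.CHL",
      "clientConnection": {
        "connection": [
          { "host": "<icp-proxy-url>", "port": 31414 }
        ],
        "queueManager": "QMFR1"
      },
      "type": "clientConnection"
    }
  ]
}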
9 MQ application set-up
To configure the MQ client application on the local Windows setup, we need to set the following environment variables.
- If running the MQ client application from a local Windows system (an example is shown after this list):
- Navigate to the MQ installation path in the command prompt.
- Set MQCHLLIB={path where you placed the CCDT json file}
- Set MQCHLTAB={CCDT json file name}
- Set MQAPPLNAME=<application name>
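A concrete example, assuming the CCDT file was saved as C:\mq\ccdt\Cluster_CCDT.json (the path and application name are placeholders):
Commands:
SET MQCHLLIB=C:\mq\ccdt
SET MQCHLTAB=Cluster_CCDT.json
SET MQAPPLNAME=MQCLUSTERTEST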
10 Test Scenario Execution for MQ Cluster
Demonstrate the MQ cluster workload balancing
Use the 'amqsphac.exe' sample to put messages on the local cluster queue of any one of the cluster queue managers on CP4I. In this test, the messages are put on QMPR's CLUSQ and are expected to be workload balanced across the CLUSQ instances on QMFR1, QMFR2 and QMPR.
>amqsphac <Cluster queue> <Cluster Queue Manager>
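For example, to put messages to the cluster queue via QMPR as done in this test (an illustrative invocation):
Command:
amqsphac CLUSQ QMPR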
Once 15 messages are sent, go to the MQ Console for each of the three queue manager instances and check the queue depth of 'CLUSQ' on QMFR1, QMFR2 and QMPR. You will find that the messages are distributed almost evenly across all three queue managers.
QMFR1 - Checked the Queue Depth
QMFR2 - Checked the Queue Depth
QMPR - Checked the Queue Depth
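If you prefer to check the depth from the MQ pods instead of the console, the current depth can be displayed with an MQSC command such as the following (run inside runmqsc on each queue manager):
Command:
DISPLAY QLOCAL(CLUSQ) CURDEPTH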
In this way, we demonstrated cluster workload balancing across the cluster queue managers. A few other test scenarios were also run, modifying this sample utility to use persistent messages in order to check for data loss when one of the queue managers is restarted during workload processing. You can also use your own MQ client applications, or App Connect flows with MQ adapters, to observe these behaviours.
11 Supporting Team
Thanks to Kiran Darbha, Matthew Whitehead, Arthur Barr and Callum Jackson for their support in realizing the MQ cluster on MQ containers using Cloud Pak for Integration.