I'm not seeing the expected results and need some help figuring out what I'm doing wrong. This is what I've tried so far:
I defined a uniform cluster with two queue managers, QC90 and QC91. Identical MQSC commands are run on each (other than the queue manager names), so the configurations are uniform.
I've defined the channel CLNT.QC9 for clients to use when connecting. The queue name is MQDEV.QUEUE.V1.
*
* Channel clients use to connect.
DEFINE CHANNEL('CLNT.QC9') CHLTYPE(SVRCONN) SHARECNV(1) REPLACE
SET CHLAUTH('CLNT.QC9') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL) CHCKCLNT(REQUIRED) DESCR('Allows connection via APP channel') ACTION(REPLACE)
*
* Local queue definitions:
DEFINE QLOCAL('MQDEV.QUEUE.V1') CLWLUSEQ(ANY) REPLACE
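To verify the setup, the queue's cluster membership and current depth can be checked on each queue manager with standard MQSC display commands (nothing here is specific to my configuration; these are just the attributes I'd look at):

```
* Show the local instance, its cluster, and its current depth:
DIS QLOCAL('MQDEV.QUEUE.V1') CLUSTER CURDEPTH
* Show every instance of the queue advertised in the cluster:
DIS QCLUSTER('MQDEV.QUEUE.V1') CLUSQMGR
```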
Both qmgrs are running in separate Docker containers. I've verified that these are communicating over the sdr/rcvr channels. For example, from QC91:
8 : dis chstatus(D*)
AMQ8417I: Display Channel Status details.
CHANNEL(DEVUC_QC91) CHLTYPE(CLUSRCVR)
CONNAME(172.19.0.4) CURRENT
RQMNAME(QC90) STATUS(RUNNING)
SUBSTATE(RECEIVE)
AMQ8417I: Display Channel Status details.
CHANNEL(DEVUC_QC90) CHLTYPE(CLUSSDR)
CONNAME(172.19.0.4(9414)) CURRENT
RQMNAME(QC90) STATUS(RUNNING)
SUBSTATE(MQGET) XMITQ(SYSTEM.CLUSTER.TRANSMIT.QUEUE)
I haven't connected any receiving clients, since I want to examine the queue depths at the end of a series of PUTs. My PUT client is Node.js-based (mq-mqi-nodejs). When I run the client I specify these options on the MQOPEN call:
let od = new mq.MQOD();
od.ObjectName = queueName;
od.ObjectType = MQC.MQOT_Q;
let openOptions = MQC.MQOO_OUTPUT | MQC.MQOO_BIND_NOT_FIXED;
mq.Open(hConn, od, openOptions, function (err, hObj) {
    ...
});
... and then MQPUT is called as follows:
let mqmd = new mq.MQMD();
mqmd.Persistence = MQC.MQPER_NOT_PERSISTENT; // an MQPER_* constant rather than a boolean
mqmd.PutApplName = pgmName;
let pmo = new mq.MQPMO();
pmo.Options = MQC.MQPMO_NO_SYNCPOINT |
MQC.MQPMO_NEW_MSG_ID |
MQC.MQPMO_NEW_CORREL_ID;
mq.PutSync(hObj, mqmd, pmo, msg, function (err) {
    ...
});
When I run the put client and have it send some number of messages, I see that all of them go to the queue on QC90 and none ever go to QC91. Eventually QC90's queue hits its maximum depth of 5000 and MQPUT begins to fail with a "queue is full" error (MQRC_Q_FULL, 2053).
What am I missing?
Thanks,
Jim
------------------------------
Jim Creasman
------------------------------
Original Message:
Sent: Mon January 03, 2022 02:27 PM
From: Jim Creasman
Subject: Balancing messages in a uniform cluster based on queue depth
Using the MQ uniform cluster (UC) pattern I'm thinking through how to best handle the following scenario.
- We have a UC with multiple receiving client applications (more instances than cluster members) that remain connected, waiting to receive messages from a queue. The cluster automatically balances these application connections across the queue managers.
- The message sender is a batch process that wakes up periodically, connects to the cluster, sends a burst of messages, and disconnects once its work is done. It runs as a single instance.
- Each message is independent of the rest, so it doesn't matter which receiver processes a message or in what order messages are handled. The pattern is point-to-point; no reply is needed.
The challenge is that when the sender connects, one queue receives all the messages and the rest remain empty. It would be preferable for the messages to be distributed across the queues in the UC.
I found the cluster queue monitoring sample program (AMQSCLM), which transfers messages off a queue when no client instance is connected to it. It seems it could be modified to take the same action based on queue depth: when the depth exceeds a threshold, the program would transfer some messages to the other queues in the cluster.
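To make the idea concrete, here is a rough sketch of the depth-based decision step. This is my own illustration, not code from AMQSCLM; the function name and data shapes are made up:

```javascript
// Hypothetical depth-based rebalancing decision, in the spirit of AMQSCLM
// but keyed on CURDEPTH instead of attached consumers. Given the current
// depth of each queue instance and a threshold, decide how many messages
// to move from overloaded instances to the least-loaded one.
function chooseTransfers(depths, threshold) {
  // depths: { qmgrName: currentDepth }
  const entries = Object.entries(depths);
  const target = entries.reduce((min, e) => (e[1] < min[1] ? e : min));
  const transfers = [];
  for (const [qmgr, depth] of entries) {
    if (qmgr !== target[0] && depth > threshold) {
      // Move enough messages to bring the source back under the threshold.
      transfers.push({ from: qmgr, to: target[0], count: depth - threshold });
    }
  }
  return transfers;
}

console.log(chooseTransfers({ QC90: 5000, QC91: 0 }, 2500));
// → [ { from: 'QC90', to: 'QC91', count: 2500 } ]
```

The actual message movement would still have to work the way AMQSCLM moves messages, presumably an MQGET from the overloaded instance followed by an MQPUT toward the target instance.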
- Is this the right approach, or is there a better way to handle this in a UC?
- Should my version of AMQSCLM be run on each of the MQ servers? Or is it better to run it in a separate container (Kubernetes)?
- Any issues with modifying/extending AMQSCLM (e.g., licensing) for this purpose?
Thanks,
Jim
------------------------------
Jim Creasman
------------------------------