MQ


Balancing messages in a uniform cluster based on queue depth

  • 1.  Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Mon January 03, 2022 02:28 PM
    Using the MQ uniform cluster (UC) pattern I'm thinking through how to best handle the following scenario.
    • We have a UC with multiple receiving client applications (more than the number of cluster members) that remain connected, waiting to receive messages from a queue.  The cluster automatically balances these application connections across the queue managers.
    • The message sender is a batch process that wakes up periodically, connects to the cluster and sends a burst of messages.  It is a single instance that connects to MQ when it wakes and then disconnects once its work is done.
    • Each message is independent of the rest, so it doesn't matter which receiver processes a message or in what order messages get handled.  The pattern is point-to-point; no reply is needed.
    The challenge is that when the sender connects, one queue receives all the messages and the rest remain empty.  It would be preferable for the messages to be distributed across the queues in the UC. 

    I found the cluster queue monitoring sample program (AMQSCLM), which detects and transfers messages whenever no client instance is connected to a queue.  It seems this could be modified to act on queue depth as well: when the queue depth exceeds a threshold, the program would transfer some messages to the other queues in the cluster. 

    • Is this the right approach, or is there a better way to handle this in a UC?
    • Should my version of AMQSCLM be run on each of the MQ servers?  Or, is it better to run this in a separate container (Kubernetes)?
    • Any issues with modifying/extending AMQSCLM (e.g., licensing) for this purpose?

    Thanks,
    Jim

    ------------------------------
    Jim Creasman
    ------------------------------


  • 2.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Mon January 03, 2022 06:09 PM
    Solve this on the sending side as the messages are produced.

    The message sender will connect to a queue manager in the cluster. It does not connect to the cluster.

    If that queue manager does not contain an instance of the clustered destination queue, then the messages should load balance across the queue instances on the other queue managers. Make sure the sender app specifies MQOO_BIND_NOT_FIXED on the MQOPEN of the queue to allow that.
    https://www.ibm.com/docs/en/ibm-mq/9.2?topic=calls-mqopen-open-object

    If the queue manager the sender connects to does have an instance of the queue, then in addition to specifying MQOO_BIND_NOT_FIXED on the MQOPEN you will also need to set the CLWLUSEQ attribute of the queue to ANY.
    https://www.ibm.com/docs/en/ibm-mq/9.2?topic=clusters-clwluseq-queue-attribute
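
    For example, on a queue that is already clustered (hypothetical name MYQ), something along these lines should cover both cases - treat it as a sketch, not tested against your setup:

    * DEFBIND(NOTFIXED) means apps that open with MQOO_BIND_AS_Q_DEF also get the
    * not-fixed behaviour; CLWLUSEQ(ANY) lets puts arriving on a queue manager that
    * hosts an instance still be workload balanced instead of always staying local.
    ALTER QLOCAL('MYQ') DEFBIND(NOTFIXED) CLWLUSEQ(ANY)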


    ------------------------------
    Peter Potkay
    ------------------------------



  • 3.  RE: Balancing messages in a uniform cluster based on queue depth

    Posted Tue January 04, 2022 03:35 AM
    Hi Jim

    I think Peter's answer is spot on - there is no need to involve amqsclm here if all you need is to spread the message load at put time.  A Uniform Cluster is just a 'specialisation' of an MQ cluster, so all traditional load balancing techniques are available to you.

    Since you are strongly encouraged to keep Uniform Cluster queue managers truly 'uniform', you would expect there to be an instance of the queue on all queue managers, so you will need to set CLWLUSEQ to avoid the default 'prefer local' behaviour.

    amqsclm would come into play if you are concerned that there will be periods when messages cannot be processed on particular nodes in the UC - these would typically be more 'advanced' scenarios, since usually you would expect to have at least as many receiving apps as queue managers in the cluster and let MQ balance the connections.  If you did have such a situation for some reason, amqsclm can be used to 'move on' messages to somewhere a getting app is available.
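
    Since the definitions should be identical on every member of a UC, it may be simplest to make that setting once at the queue manager level rather than per queue - a minimal sketch, assuming your queues leave CLWLUSEQ at its QMGR default:

    * Run on every cluster member; queues defined with CLWLUSEQ(QMGR) then inherit 'ANY'
    ALTER QMGR CLWLUSEQ(ANY)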

    Thanks,
    Anthony

    ------------------------------
    Anthony Beardsmore
    IBM MQ Development
    IBM
    ------------------------------



  • 4.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Tue January 04, 2022 11:43 AM
    Thanks, guys.  Seems I need to take a closer look at the bind and CLWLUSEQ options.  When I first read through these I'd assumed they applied at application connection time and were "fixed" by the time a message was sent.  I'll experiment with them to see how they work.  In the meantime I have a few follow-up questions.

    Let's say I have a uniform cluster where QM1 & QM2 are the queue managers.  Both have a local queue defined, called MYQ.V1 with CLWLUSEQ(ANY) specified.  There are multiple instances of a receiver client connected to the queue managers such that half are connected to QM1 and the rest are connected to QM2, all waiting on messages to arrive.

    At this point a sender client connects using a CCDT with "*QM_ANY" and specifies MQOO_BIND_NOT_FIXED when the MQOPEN is executed.  It will connect to one of the QMs.  This client proceeds to send 10,000 messages to MYQ.V1.  The receiving clients are slower, taking more time to process each message than the sender takes to produce it. 

    Should I expect to see roughly half the messages going to each of the two queues?  

    What algorithm does MQ use to distribute the message load?  I've been reading through the docs at https://www.ibm.com/docs/en/ibm-mq/9.2?topic=clusters-cluster-workload-management-algorithm.  There are a lot of options to consider, and I don't see any mention of the current queue depth being a factor.  However, all things being equal, I believe this statement implies a round-robin approach is used:

    After the list of valid destinations has been calculated, messages are workload balanced across them, using the following logic:
    • When more than one remote instance of a destination remains and all channels to that destination have CLWLWGHT set to the default setting of 50, the least recently used channel is chosen. This approximately equates to a round-robin style of workload balancing when multiple remote instances exist.
    Seems like this is the behavior I want, assuming my understanding is correct.
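
    As a sanity check I can also display the weights and related attributes the algorithm will consider for each cluster member (assuming I haven't changed any channel defaults):

    * Run on the queue manager the sender connects to
    DISPLAY CLUSQMGR(*) CLWLWGHT CLWLPRTY CLWLRANK SUSPEND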

    Jim

    ------------------------------
    Jim Creasman
    ------------------------------



  • 5.  RE: Balancing messages in a uniform cluster based on queue depth

    Posted Tue January 04, 2022 11:59 AM

    Yes, I think you've found the right sources to be looking at and your high-level understanding is correct.  For a clustered queue the 'destination binding' is never fixed at connect time - it is fixed at OPEN, at PUT, or (less commonly) between message groups.

    As you've noted, there is not (currently) any mechanism to use target queue depth as a factor in the balancing algorithm; only information readily available on the sending queue manager side is considered.  This has occasionally been mooted as a product enhancement in the past, but it is not one which has received a great deal of support/pressure from the community, so it hasn't risen to the 'top of the pile' to date.

    The only other thing you may have missed, which is important to be aware of, is that the balancing is not 'per queue' but 'per channel' (or effectively 'per queue manager').  So if the scenario were exactly as described, with no other queues in the mix, then yes, you would expect to see (more or less) 50:50 balancing.  However, once other queues and applications are involved (some of which might have different balancing requirements, such as using BIND_ON_OPEN), you might get slightly different results.  In practice, in most real-world scenarios you probably wouldn't notice the difference, but it does sometimes catch people out.



    ------------------------------
    Anthony Beardsmore
    IBM MQ Development
    IBM
    ------------------------------



  • 6.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Tue January 04, 2022 03:10 PM
    I'm not seeing the expected results.  I need some help to figure out what I'm doing wrong.  This is what I've tried so far:

    I defined a uniform cluster with two qmgrs, QC90 and QC91.  Identical commands are run on each so they are uniform (apart from the qmgr names).

    I've defined channel CLNT.QC9 for the clients to use when connecting.  The queue name is MQDEV.QUEUE.V1.
    *
    * Channel clients use to connect.
    DEFINE CHANNEL('CLNT.QC9') CHLTYPE(SVRCONN) SHARECNV(1) REPLACE

    SET CHLAUTH('CLNT.QC9') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL) CHCKCLNT(REQUIRED) DESCR('Allows connection via APP channel') ACTION(REPLACE)
    *
    * Local queue definitions:

    DEFINE QLOCAL('MQDEV.QUEUE.V1') CLWLUSEQ(ANY) REPLACE

    Both qmgrs are running in separate Docker containers.  I've verified that these are communicating over the sdr/rcvr channels.  For example, from QC91:

         8 : dis chstatus(D*) 
    AMQ8417I: Display Channel Status details.
       CHANNEL(DEVUC_QC91)                     CHLTYPE(CLUSRCVR)
       CONNAME(172.19.0.4)                     CURRENT
       RQMNAME(QC90)                           STATUS(RUNNING)
       SUBSTATE(RECEIVE)                  
    AMQ8417I: Display Channel Status details.
       CHANNEL(DEVUC_QC90)                     CHLTYPE(CLUSSDR)
       CONNAME(172.19.0.4(9414))               CURRENT
       RQMNAME(QC90)                           STATUS(RUNNING)
       SUBSTATE(MQGET)                         XMITQ(SYSTEM.CLUSTER.TRANSMIT.QUEUE)
    I haven't connected any receiving clients since I want to examine the queue depth at the end of a series of PUTs.  My PUT client is NodeJS-based (mq-mqi-nodejs).  When I run the client I'm specifying these options on the MQOPEN:

    let od = new mq.MQOD();
    od.ObjectName = queueName;
    od.ObjectType = MQC.MQOT_Q;
    // MQOO_BIND_NOT_FIXED so the destination can be re-chosen per message
    let openOptions = MQC.MQOO_OUTPUT | MQC.MQOO_BIND_NOT_FIXED;

    mq.Open(hConn, od, openOptions, function (err, hObj) {
    ...
    });
    ... and then MQPUT is called as follows:
    let mqmd = new mq.MQMD();
    mqmd.Persistence = false;
    mqmd.PutApplName = pgmName;

    let pmo = new mq.MQPMO();
    pmo.Options = MQC.MQPMO_NO_SYNCPOINT |
                  MQC.MQPMO_NEW_MSG_ID |
                  MQC.MQPMO_NEW_CORREL_ID;
    mq.PutSync(hObj, mqmd, pmo, msg, function (err) {
    ...
    });
    When I run the put client and have it queue up some number of messages, I see all of them going to the queue defined on QC90 and none ever go to QC91.  Eventually QC90's queue hits the max depth of 5000 and MQPUT begins to fail with a "queue is full" (MQRC_Q_FULL) error.  

    What am I missing?

    Thanks,
    Jim


    ------------------------------
    Jim Creasman
    ------------------------------



  • 7.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Tue January 04, 2022 03:48 PM
    Got it!  @Josh McIver pointed out that my queue was missing the CLUSTER option.  Once I added that, it works as expected.  I sent 1000 messages into the system and now see 500 sitting on each queue.

    DEFINE QLOCAL('MQDEV.QUEUE.V1') CLWLUSEQ(ANY) CLUSTER(DEVUC) REPLACE

    Thanks, Josh!
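
    For anyone following along, a quick way to confirm both instances of the queue are now advertised in the cluster (run from either qmgr):

    DISPLAY QCLUSTER('MQDEV.QUEUE.V1') CLUSTER CLUSQMGR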

    ------------------------------
    Jim Creasman
    ------------------------------



  • 8.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Fri January 07, 2022 03:53 PM
    Now that I have a clustered queue defined properly, I've been playing around with some different scenarios involving remote queue definitions.  In the first scenario I have the local queue defined on a non-clustered qmgr and the remote queue definitions on the clustered qmgrs.  A picture is the best way to describe it.


    When my sending client sends a batch of messages and BIND_NOT_FIXED is used, do half of these go through QC90 and the other half go through QC91?  Regardless, all messages are forwarded to the local queue on QS90 as expected.  I'm just curious as to whether BIND_NOT_FIXED applies in this case.

    My next question is, can I flip the script as in the following diagram?  This time the local queue definition is in the cluster (QC90 & QC91).  The client is sending to a queue that is defined as remote on qmgr QS90.  If the client were connected directly to a cluster qmgr and put the messages with BIND_NOT_FIXED, the messages would be divided between the two queues.  Is there an option similar to BIND_NOT_FIXED that can be specified when the remote queue on QS90 is created so that it acts in the same way?  In other words, is it possible for the client to put 100 messages to the remote queue and have 50 end up on each of the two clustered queues?


    Thanks,
    Jim

    ------------------------------
    Jim Creasman
    ------------------------------



  • 9.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Sun January 09, 2022 04:13 AM

    On your first scenario, if half your messages were to go via QC90 and the other half via QC91, it wouldn't be along the lines you have drawn in your picture. Your "sending client" application is, I assume, making a connection to one of QC90 or QC91 - that's the dotted line in your picture. Then, having connected to a queue manager, let's assume QC90, the application proceeds to put messages. These puts will resolve either to a local instance of a clustered queue or to a remote instance of a cluster queue; in the latter case they end up on the SYSTEM.CLUSTER.TRANSMIT.QUEUE ready to be delivered to another queue manager. As you learned in earlier responses to this thread, this depends on the queue being clustered (i.e. having a non-blank value in the CLUSTER keyword) and how you have the CLWLUSEQ attribute set. The same is true for this scenario, except that the use case for sending half of the messages from QC90 to QC91, only to then have them routed on to QS90, seems less clear. This is a case where CLWLUSEQ(LOCAL) would be far more appropriate, don't you think?

    Your second scenario is possible, but only if QS90 is also in the cluster with QC90 and QC91. Because QS90 is not in the cluster, it has no knowledge of where the clustered queue instances reside and so cannot route messages across all of them. If it were in the cluster, the remote queue definition on QS90 could leave the RQMNAME attribute blank, and messages put using that remote queue definition would act the same way as a put to the queue name with a blank qmgr name - that allows the cluster workload algorithm to get in there and pick. I think you'd have to change the DEFBIND option on the remote queue definition if the application were connecting to a queue manager outside of the cluster, as its MQOO option wouldn't carry through.
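
    To make that concrete, if QS90 did join the cluster, the remote queue definition there might look something like this (a sketch only, not tested):

    * On QS90, assuming it were a member of cluster DEVUC: blank RQMNAME lets the
    * cluster workload algorithm choose the target queue manager, and
    * DEFBIND(NOTFIXED) keeps that choice per message rather than per open.
    DEFINE QREMOTE('MQDEV.QUEUE.V1') RNAME('MQDEV.QUEUE.V1') RQMNAME('') DEFBIND(NOTFIXED) REPLACE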

    Are you just trying out different scenarios as a learning exercise, or do you have a problem to solve that we could help with?

    Cheers,
    Morag



    ------------------------------
    Morag Hughson
    MQ Technical Education Specialist
    MQGem Software Limited
    Website: https://www.mqgem.com
    ------------------------------



  • 10.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Mon January 10, 2022 08:25 AM
    Edited by Francois Brandelik Mon January 10, 2022 08:26 AM
    Hi Jim,
    In order for your messages to balance as desired when sending from QS90, you will need to point the remote queue on QS90 at a queue manager alias, and define that alias on both QC90 and QC91 - or on only one of them if you cluster the alias itself. Obviously the alias (if clustered) should have bind not fixed or bind group.
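
    Something along these lines, purely as an illustration (names assumed, not tested):

    * On QS90 (outside the cluster): route puts to the queue via an alias
    * queue manager name, over a transmission queue/channel into the cluster.
    DEFINE QREMOTE('MQDEV.QUEUE.V1') RNAME('MQDEV.QUEUE.V1') RQMNAME('DEVUC.RQMA') XMITQ('QC90') REPLACE
    * On QC90 and QC91 (keeping them uniform): a queue manager alias with a blank
    * RQMNAME, so the cluster workload algorithm chooses the destination instance.
    DEFINE QREMOTE('DEVUC.RQMA') RNAME('') RQMNAME('') XMITQ('') REPLACE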

    Hope that makes sense to you.

    ------------------------------
    Francois Brandelik
    ------------------------------



  • 11.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Mon January 10, 2022 08:03 PM
    "In the first scenario I have a the local queue defined on a non-clustered qmgr and the remote queue definitions on the clustered qmgrs."
    The queues are defined on QC90 and QC91 and are of the type "QREMOTE"?
    If yes, this changes the answers you are getting.
    Or are they of type QALIAS?
    Or is no queue actually defined on QC90 and QC91 and you are relying on MQ clustering being aware of the clustered instance of the queue on QS90?

    "When my sending client sends a batch of messages and BIND_NOT_FIXED is used, do half of these go through QC90 and the other half go through QC91?"
    Remember, an app does not connect to a cluster, it connects to a queue manager. Client apps can concurrently make one or more connections to one or more queue managers. It is then up to the app to decide how many messages it sends over which connections.

    ------------------------------
    Peter Potkay
    ------------------------------



  • 12.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Thu January 13, 2022 02:26 PM
    I have the second scenario working now using the following definitions:

    QS90 (single MQ server):

    DEFINE QREMOTE('MQDEV.QUEUE.V1') RNAME('MQDEV.QUEUE.V1') RQMNAME('DEVUC.RQMA') XMITQ('QC90') REPLACE

    QC90 (clustered):

    DEFINE QLOCAL('MQDEV.QUEUE.V1') CLWLUSEQ(ANY) CLUSTER(DEVUC) DEFBIND(NOTFIXED) REPLACE

    DEFINE QREMOTE('DEVUC.RQMA') RNAME('') RQMNAME('') XMITQ('') REPLACE

    QC91 (clustered):

    DEFINE QLOCAL('MQDEV.QUEUE.V1') CLWLUSEQ(ANY) CLUSTER(DEVUC) DEFBIND(NOTFIXED) REPLACE

    Now, when I put 10 messages to MQDEV.QUEUE.V1 on QS90, I see 5 of them show up on QC90 and the other 5 on QC91.  QED

    Thanks everyone for your help.

    Jim

    ------------------------------
    Jim Creasman
    ------------------------------



  • 13.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Fri January 14, 2022 03:03 AM

    With those definitions, and QS90 not in the cluster, all the messages are going via QC90. Some will then make a second hop to QC91. Just checking that was what you wanted?

    Cheers,
    Morag



    ------------------------------
    Morag Hughson
    MQ Technical Education Specialist
    MQGem Software Limited
    Website: https://www.mqgem.com
    ------------------------------



  • 14.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Fri January 14, 2022 09:35 AM
    Yes, an extra hop is fine in this case.  Within the uniform cluster there will be multiple consumer instances, each potentially connected to a different instance of the queue.  My goal is to make sure that all of them have work to do and none sit idle because all the messages are on the QC90 queue.

    ------------------------------
    Jim Creasman
    ------------------------------



  • 15.  RE: Balancing messages in a uniform cluster based on queue depth

    IBM Champion
    Posted Mon January 10, 2022 01:37 PM
    Thanks, Morag.  This is mainly a learning exercise so I know what to expect; I don't have a specific problem in hand.  In this particular diagram QS90 is our z/OS queue manager, which is non-clustered and single-instance.  The others (QC90 & QC91) are on the distributed side (Kubernetes) and form a uniform cluster.  

    Jim

    ------------------------------
    Jim Creasman
    ------------------------------