MQ

  • 1.  Operationally controlling Application data flowing across overlapping clusters.

    Posted Fri July 02, 2021 11:30 AM

    I have a use case that I'm trying to resolve, so I'm asking for help and guidance on the best approach.


    I'm implementing a Uniform Cluster, not only for improved message service availability but also to eliminate an infrastructure single point of failure.
    Application data will flow between the UC and an MQ workload-balanced cluster of IIB hosts, via a pair of overlapping gateway Qmgrs.

     

    The challenge as I see it is:

    To support the daily BAU release testing, the support team require the ability to halt the flow of application test data / traffic between the source app and the target IIB Qmgrs (whilst still allowing the client application to write messages), and also to clear unwanted test messages from the Qmgrs before restarting the flow of "required" test messages across the clusters.

     

    Our current infrastructure uses distributed queuing between the application Qmgr and a single gateway Qmgr (the entry point into the MQ workload-balanced IIB cluster), so the flow of data is easily managed by stopping the sender channels and removing application messages from the transmission queues.
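
    For illustration, the sort of control the support team applies today on the application Qmgr looks roughly like this in runmqsc (the channel and transmission queue names here are only examples, not our real objects):

    * Halt the outbound flow towards the gateway queue manager
    STOP CHANNEL('APP.TO.GATEWAY')

    * Remove unwanted test messages from the transmission queue
    * (CLEAR QLOCAL empties the whole queue and only works while nothing has it open;
    *  selective removal is done with browse/delete tooling instead)
    CLEAR QLOCAL('GATEWAY.XMITQ')

    * Resume normal message flow
    START CHANNEL('APP.TO.GATEWAY')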
     
    Within the new clustered design, we still need the ability to temporarily halt the traffic flow and delete volume-generated test data, without the risk of impacting cluster stability through inadvertent removal of cluster management messages.
     
    Is there a simple way to separate application message traffic from inter-Qmgr cluster maintenance traffic, that would provide the means to control the flow of application messages and also give assurance that the route used for cluster transmission only handles application-related messages?

     

    There are options for manually defining cluster transmission queues with additional cluster channels, and for using clustered remote queue definitions, but does this negate the workload-balanced message routing across the gateway Qmgr pair by effectively defining a more static route to specific Qmgrs?
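
    For reference, the sort of manual definition I have in mind is a transmission queue tied to particular cluster-sender channels via CLCHNAME, along these lines (the queue and channel names are examples only):

    * Manually defined cluster transmission queue, used only by the
    * cluster-sender channels whose names match the CLCHNAME pattern
    * (i.e. the channels towards the gateway Qmgrs)
    DEFINE QLOCAL('APP.CLUSTER.XMITQ') +
           USAGE(XMITQ) +
           CLCHNAME('TO.GWQM*') +
           REPLACE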

     

    Many Thanks in advance
    Sean Adams



    ------------------------------
    SEAN ADAMS
    ------------------------------


  • 2.  RE: Operationally controlling Application data flowing across overlapping clusters.

    IBM Champion
    Posted Mon July 05, 2021 04:24 AM
    Hi Sean,

    as you say, halting the messages at the Gateway or UC QM and deleting them in a cluster is problematic, but perhaps there is an easier way if we restate the requirement.

    If instead of saying we need to block the messages, perhaps we can look at it as: we need to prevent the messages from reaching IIB.

    Let's assume (or propose if you don't currently have it) that you have a queue alias on each Gateway queue manager that is visible to the QMs in the uniform cluster via the second cluster (The UC/Gateway cluster). This qalias directs messages to the IIB queue managers via a third (IIB/Gateway) cluster.

    Instead of trying to block messages from reaching this alias, or blocking channels so the messages can't reach IIB, why not try the following:
    Create a TOPIC object (DUMMY) on the gateway queue managers.

    When you want to ignore incoming messages, alter the queue alias at the Gateway QMs to direct messages to topic DUMMY instead of the normal destination. Since there is no subscription on DUMMY, you don't have to clean up unwanted messages; MQ will just never store them.

    If you do sometimes want to review the messages, you can either subscribe and send them to a queue, or just create a local queue and direct the alias to that for the duration of the testing.

    When you want messages to be processed by IIB again, just change the alias back.
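
    A minimal runmqsc sketch of the idea, using illustrative names for the alias, the real IIB destination, the holding queue and the gateway cluster (substitute your own):

    * Normal running: the clustered alias resolves to the real IIB destination
    DEFINE QALIAS('APP.IN.ALIAS') TARGET('APP.IN.IIB') TARGTYPE(QUEUE) CLUSTER('GATEWAY.UC') REPLACE

    * One-off: a topic object with no subscribers, so publications are simply discarded
    DEFINE TOPIC('DUMMY') TOPICSTR('test/discard') REPLACE

    * During release testing: point the alias at the topic; MQ never stores the messages
    ALTER QALIAS('APP.IN.ALIAS') TARGET('DUMMY') TARGTYPE(TOPIC)

    * Optional: keep the messages instead, via an administrative subscription
    * that delivers the publications to a holding queue
    DEFINE QLOCAL('HOLD.TEST.DATA') REPLACE
    DEFINE SUB('HOLD.TEST.SUB') TOPICOBJ('DUMMY') DEST('HOLD.TEST.DATA') REPLACE

    * When IIB should receive messages again, just switch the alias back
    ALTER QALIAS('APP.IN.ALIAS') TARGET('APP.IN.IIB') TARGTYPE(QUEUE)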

    Regards,

    Neil Casey.

    ------------------------------
    Neil Casey
    Senior Consultant
    Syntegrity Solutions
    Melbourne, Victoria
    IBM Champion (Cloud) 2019-21
    ------------------------------



  • 3.  RE: Operationally controlling Application data flowing across overlapping clusters.

    Posted Wed July 07, 2021 06:49 AM

    Hi Neil

     

    Thank you for your response.

    This is a great technique that I hadn't considered for managing test data volumes in the performance-test environments, enabling either immediate discard or retention of messages on "holding" queues, depending on which option is required.

    Re-reading my original use case, this fits the description and will definitely be used going forward.

    However, I realise I mistakenly omitted a point: the ability to hold / control the data flow would also be required periodically in a production environment, primarily to support full application service outages during some release activities (though this more so affects the inbound message routing from IIB to the UC application Qmgrs).

     

    My apologies, as I understand this significantly changes my originally described requirements.

     

     

    Kind Regards

     

    Sean Adams

    Integration Specialist

     

      

    Integration Solutions Competency Centre
    Kingfisher IT Services | Southampton

    02380 691758
    sean.adams@kingfisher.com


     

     






  • 4.  RE: Operationally controlling Application data flowing across overlapping clusters.

    IBM Champion
    Posted Mon July 05, 2021 05:18 AM
    Hi Sean,
    You mentioned the separation of cluster maintenance traffic and data traffic. That is easily achieved using two simple setup strategies:
    In the queue manager, change the default cluster transmission queue attribute (DEFCLXQ) from SCTQ to CHANNEL. In addition, don't create any application queues on the full repositories.
    This avoids mixing the two kinds of traffic on the same channel, and thus on the same transmission queue.
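
    For example, in runmqsc (the change applies to cluster-sender channels when they next start):

    * Give every cluster-sender channel its own dynamic transmission queue
    * (SYSTEM.CLUSTER.TRANSMIT.<channel name>) instead of sharing SYSTEM.CLUSTER.TRANSMIT.QUEUE
    ALTER QMGR DEFCLXQ(CHANNEL)

    * Confirm the setting
    DISPLAY QMGR DEFCLXQ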

    Enjoy

    ------------------------------
    Francois Brandelik
    ------------------------------



  • 5.  RE: Operationally controlling Application data flowing across overlapping clusters.

    Posted Wed July 07, 2021 09:27 AM

    Hi Francois

     

    Thank you for responding, your insight is much appreciated.

     

    I must admit that I have already set the queue manager property DEFCLXQ to CHANNEL on the new Uniform Cluster queue managers and the pair of gateway Qmgrs (initially when considering TLS encryption between cluster Qmgrs).

    As you describe, this effectively supports the separation of application data traffic from cluster "maintenance" traffic, but only within our IIB workload-balanced cluster, as that cluster already runs with two dedicated Full Repos which do NOT host application-related definitions.

    Within a "normal" overlapping cluster setup, this would work, as you describe.

    I do think that the Uniform Cluster setup adds an additional layer of complexity to this, as the first 2 Qmgrs in the UC are defined as Full Repos (that host application queues), with any additional Qmgrs being defined as Partial Repos.
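
    For context, the UC members are created using the automatic cluster support, so every member's qm.ini carries stanzas roughly like the following (queue manager names, cluster name and path are examples only); this is what fixes the first two Qmgrs as the Full Repos and applies the shared .mqsc to every member on startup:

    AutoCluster:
       Type=Uniform
       ClusterName=UNICLUS
       Repository1=UCQM01
       Repository2=UCQM02

    AutoConfig:
       MQSCConfig=/shared/uniclus.mqsc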

    Initially I thought maybe I could designate the first 2 Qmgrs in the UC as Full Repos and just NOT describe them within the CCDT, so that they could not be referenced and would be excluded from client connections.

     

    It would need to be confirmed, but I think there are at least a couple of technical reasons that prevent the UC from running with Full Repos that do not host application queues.

    • The expectation is that the number of client (AppID) connections should not be less than the number of available Qmgrs.

    • All UC Qmgrs are included within the logic used to monitor client connections and AppID balance status.

    • All Qmgrs are used when providing "hints" to clients about an available Qmgr to reconnect to, so if the full repos do not have connected clients, load balancing would not work.

    • The Central Configuration features, where .mqsc scripts are applied to all UC Qmgrs on startup, would mean the full repos would host clustered alias queues (targets for inbound message delivery from IIB) with no connected clients to consume them.

     

     

    I don't know if there are options around this, or whether I have just placed too high an emphasis on the separation of app data from cluster maintenance traffic; the frequency of cluster maintenance communication may be so low as not to really be a concern, as it would be automatically "retried" by the cluster Qmgrs and therefore recovered before it ever became an issue.

     

     

     

    Kind Regards

     

    Sean Adams

    Integration Specialist

     

      

    Integration Solutions Competency Centre
    Kingfisher IT Services | Southampton

    02380 691758
    sean.adams@kingfisher.com


     

     






  • 6.  RE: Operationally controlling Application data flowing across overlapping clusters.

    IBM Champion
    Posted Wed July 07, 2021 10:00 AM

    "It would need to be confirmed, but I think there are at least a couple of technical reasons that prevent the UC from running with Full Repos that do not host application queues.

    ·                     Expectation is that the number of client (AppID) connections should not be less than the number of available Qmgrs.

    ·                     All UC Qmgrs are included, within the logic used in monitoring client connections and AppID balanced status.

    ·                     All Qmgrs are used when providing "hints" to clients, of an available qmgr to reconnect to, so if full repos do not have connected clients, load balancing would not work.

    ·                     The Central Configuration features, where .mqsc scripts are applied to all UC Qmgrs on startup, would mean the full repos, would host clustered alias queues (targets for Inbound message delivery From IIB) and would have no connected clients to consume them."

    Have you tried it? It is simple and works.
    • In your CCDT, do not include the full repositories in the queue manager configuration for the consumers / producers.
    • The premise is that you should have about 2 times as many consumers / producers as queue managers hosting the queues, so that you have 2 consumers / producers per queue. This works fine with the UC load balancing.
    • UC load-balancing hints are taken from the list of Qmgrs hosting the queue and the list of allowable Qmgrs (i.e. those accessible through *qmgr from the client), so you can create a UC cluster where the full repositories have no application queues.
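
    As a sketch, a JSON CCDT that simply leaves the full repositories out might look like this (the channel name, host names and queue manager names are examples only); the full repositories have no entries, so clients can never be directed to them:

    {
      "channel": [
        {
          "name": "UNICLUS.SVRCONN",
          "type": "clientConnection",
          "clientConnection": {
            "connection": [ { "host": "ucqm03.example.com", "port": 1414 } ],
            "queueManager": "UCQM03"
          }
        },
        {
          "name": "UNICLUS.SVRCONN",
          "type": "clientConnection",
          "clientConnection": {
            "connection": [ { "host": "ucqm04.example.com", "port": 1414 } ],
            "queueManager": "UCQM04"
          }
        }
      ]
    }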

    Have fun

     



    ------------------------------
    Francois Brandelik
    ------------------------------