  • 1.  Limitations of MQ on containers

    Posted Sat September 21, 2024 07:58 AM

    We have been using MQ on containers, primarily via the IBM Cloud Pak for Integration product, where you deploy MQ using Native HA on OpenShift. About 4 years ago, we had a problem where ACE pods (from the same product) connected to MQ and the QMGR would crash when "dis qmstatus conns" returned 600 connections. We quickly fixed this by raising the KubeletConfig podPidsLimit from the then-default 1024 to 4096. Newer versions of OpenShift now have 4096 as the default!
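
    For anyone who hits the same wall, the fix was a KubeletConfig along these lines (a sketch from memory; the name and pool selector are illustrative, so check your own MachineConfigPool labels):

        apiVersion: machineconfiguration.openshift.io/v1
        kind: KubeletConfig
        metadata:
          name: worker-pod-pids-limit
        spec:
          machineConfigPoolSelector:
            matchLabels:
              pools.operator.machineconfiguration.openshift.io/worker: ""
          kubeletConfig:
            podPidsLimit: 4096    # per-pod PID limit; the default used to be 1024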

    Today we run the same QMGR with around 150+ ACE pods connecting to it, around 1,000+ queues defined, a 4 KB average message size, about 40% of messages persistent, and around 100,000 messages processed every hour. My question is: should we be worried as more and more ACE pods connect to the same QMGR? Is there a system limit in MQ for queues and connections when running on containers?
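
    For context, this is roughly how we watch those numbers today (a sketch; QM1 stands in for our real queue manager name, and <mq-pod> is a placeholder):

        oc exec -it <mq-pod> -- runmqsc QM1
        * total connections to the queue manager
        DIS QMSTATUS CONNS
        * who is connected, by application name
        DIS CONN(*) TYPE(CONN) APPLTAG
        * current depth of every queue
        DIS QSTATUS(*) CURDEPTH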

    Now comes the hard part that not many understand; I've tried to ask around but have no clear answer yet. An MQ queue manager running in a container in a pod on any Kubernetes platform is limited to just 1 process, which is probably why you can run hundreds of pods on a given VM/worker node (on OpenShift, the recommended maximum is 250 pods per worker). Compare this to MQ on a Linux VM, where you could run, for example, 100 processes instead of 1. I would then have to say that anything on containers in general has inferior performance in the context of the "process" compared to running on VMs. Could someone shed some light here? Is there an IBM best practice guideline for MQ on containers? Any input is much appreciated, thank you!



    ------------------------------
    Abu Davis
    ------------------------------


  • 2.  RE: Limitations of MQ on containers

    Posted Mon September 23, 2024 04:53 AM

    IBM MQ in containers has no additional limits.  The main difference between MQ in a Linux container and MQ on Linux is that we usually use the "no-install" version of MQ, which prevents the use of operating system users.  In every other way, it acts like MQ on Linux, because that's what it is.  The PID limit you mentioned does not come from MQ, but from the container platform, which can optionally impose some additional limits (in the same way that a Linux administrator could).  There are, however, a number of general-purpose limits in MQ which you could hit, such as "MaxChannels".  See Planning scalability and performance for IBM MQ in containers for some more details.
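
    As a sketch of what raising one of those limits looks like in a container deployment (the names here are illustrative), you can put a qm.ini fragment in a ConfigMap:

        # channels.ini, held in a ConfigMap named mq-ini
        CHANNELS:
           MaxChannels=5000
           MaxActiveChannels=5000

    ...and, if you use the MQ Operator, reference it from the QueueManager resource (other required fields, such as the license section, omitted):

        apiVersion: mq.ibm.com/v1beta1
        kind: QueueManager
        metadata:
          name: qm1
        spec:
          queueManager:
            ini:
              - configMap:
                  name: mq-ini
                  items:
                    - channels.ini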

    A container best practice is to limit a container to a single "concern", which often equates to a single process, but not always.  In IBM MQ's case, the queue manager continues just as it always has on Linux, which means running the multiple processes for that queue manager in a single container.  So again, nothing really has to change.  MQ in containers basically has identical performance to MQ on Linux (again, that's exactly what it is), but the container platform sometimes does add some extra layers, notably:

    • Container platforms such as Kubernetes use a software-defined networking stack.  This allows powerful capabilities like moving workloads between Nodes and retaining the same IP address, but does come at a performance cost.
    • Container platforms such as Kubernetes and other public cloud environments often drive the desire for network-attached storage.  Again, this allows the ability to move workloads between Nodes and additional resilience options, but can inject significant latency over what you'd see with a local disk.  You can trade this ability for one where the Pod is bound to a local volume with affinity rules to lock it to a particular Node, but most people don't, AFAIK.
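
    For completeness, the local-volume alternative mentioned above looks something like this in plain Kubernetes (illustrative names; for local volumes the nodeAffinity section is mandatory, and it is exactly what pins the data, and therefore the Pod, to one Node):

        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: mq-local-pv
        spec:
          capacity:
            storage: 10Gi
          accessModes:
            - ReadWriteOnce
          persistentVolumeReclaimPolicy: Retain
          storageClassName: local-storage
          local:
            path: /mnt/mqdata            # pre-provisioned directory on the Node
          nodeAffinity:                  # required for local volumes
            required:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: kubernetes.io/hostname
                      operator: In
                      values:
                        - worker-1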

    Finally, I'd say that containers and cloud environments do also offer a new opportunity: more queue managers.  Whereas we have commonly seen people running small numbers of large queue managers serving numerous applications, the tools that container environments offer can make it easier to have lots of smaller queue managers.  Techniques like declarative configuration (MQ autoconfig and the MQ Operator), and MQ Uniform Clusters help with this.  This approach can help side-step issues you might find with running particularly large queue managers.  It can also offer additional benefits, like limiting the impact of a single failure, and making it easier to upgrade or change a single queue manager without affecting other applications.
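
    To give a flavour of the declarative style (illustrative names again): the MQ Operator can feed MQSC into a queue manager at start-up from a ConfigMap, so each of those smaller queue managers can be defined entirely in source control:

        * config.mqsc, held in a ConfigMap named mqsc-config
        DEFINE QLOCAL('APP.REQUEST') REPLACE
        DEFINE CHANNEL('APP.SVRCONN') CHLTYPE(SVRCONN) TRPTYPE(TCP) REPLACE

    ...referenced from the QueueManager resource in much the same way as an INI fragment:

        spec:
          queueManager:
            mqsc:
              - configMap:
                  name: mqsc-config
                  items:
                    - config.mqsc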



    ------------------------------
    Arthur Barr
    Container Architect, IBM MQ
    IBM
    ------------------------------



  • 3.  RE: Limitations of MQ on containers

    Posted Mon September 23, 2024 05:53 AM
    Edited by Abu Davis Mon September 23, 2024 05:53 AM

    @Arthur Barr Thank you for taking time to explain this!

    Could you expand a bit on your comment "A container best practice is to limit a container to a single "concern", which often equates to a single process, but not always"?



    ------------------------------
    Abu Davis
    ------------------------------



  • 4.  RE: Limitations of MQ on containers

    Posted Mon September 23, 2024 06:08 AM

    Containers are not VMs.  In order to take advantage of "containment", you ideally want to break things down further than you might in a VM.  For example, you could theoretically put all your applications, your queue manager, and a database inside a single container.  This technically works, but now you can't separately limit resources (CPU, memory, etc.), separate security concerns, upgrade software independently, and so on.  You're not really getting the advantages of containers.  Some early communication in the Docker community used to talk about this translating to a single process per container, but that thinking has matured.  See https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container for a useful answer on this.
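
    That separation is the point: once the queue manager is alone in its container, you can scope limits to it precisely. A minimal sketch (illustrative values) of what you can then declare per container:

        resources:
          requests:
            cpu: "1"
            memory: 1Gi
          limits:
            cpu: "2"
            memory: 2Gi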

    Many years ago, IBM did consider trying to collapse IBM MQ into a single process for containers, but we decided not to, mainly because of the performance and scalability concerns you've highlighted: in Linux there are some per-process limits that would impact MQ if it only used one process.  As a result, MQ in a container remains unchanged, and still uses multiple processes.



    ------------------------------
    Arthur Barr
    Container Architect, IBM MQ
    IBM
    ------------------------------



  • 5.  RE: Limitations of MQ on containers

    Posted Mon September 23, 2024 11:18 AM

    @Arthur Barr Thanks again for the comment. If I understand you correctly, do you mean that an MQ queue manager on CP4I on OpenShift starts up with 1 process and is capable of expanding to N processes based on workload? If that's the case, can it use up all the available processes (and how do I calculate that number?) under high workload?



    ------------------------------
    Abu Davis
    ------------------------------



  • 6.  RE: Limitations of MQ on containers

    Posted Mon September 23, 2024 12:54 PM

    An MQ queue manager starts out as multiple processes, and MQ can run additional processes as necessary.  A detailed algorithm is not published, and it's a complex thing to predict accurately.
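
    You can see this for yourself with a shell inside the queue manager container. The exact list varies by version and configuration, but it looks something like this (abbreviated, illustrative output):

        $ ps -ef | grep amq
        mqm    123     1  ... amqzxma0 -m QM1    (execution controller)
        mqm    145   123  ... amqzlaa0 -mQM1     (queue manager agent)
        mqm    167   123  ... amqrrmfa -m QM1    (cluster repository manager)
        mqm    189   123  ... amqrmppa -m QM1    (channel process pool)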

    As an example though, the MQ "amqrmppa" process contains a thread per connected client.  In Linux, threads are implemented as processes, so you can encounter limits, imposed by the container platform or operating system, on the maximum number of processes.  In Red Hat® OpenShift® Container Platform, there is a default limit of 4096 processes per container, which means a default limit of no more than 4096 connected clients (in practice, a few fewer).  It can get more complicated, and other factors can take effect (e.g. each JMS client connection causes two threads to be created).  As discussed in the documentation I linked earlier, you can increase the limit in OpenShift, and you can limit the maximum number of connected clients to the queue manager as a whole, or via a specific channel.
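
    To put numbers on that (the channel name here is illustrative): the per-channel caps are ordinary MQSC,

        * at most 500 instances of this channel, and at most 50 per client
        ALTER CHANNEL('APP.SVRCONN') CHLTYPE(SVRCONN) MAXINST(500) MAXINSTC(50)

    ...and, assuming the usual tools are present in the image, you can watch the thread count of the channel process from inside the container:

        $ grep Threads /proc/$(pgrep -o amqrmppa)/status
        Threads:   612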



    ------------------------------
    Arthur Barr
    Container Architect, IBM MQ
    IBM
    ------------------------------