Right at the beginning of the year I wrote a blog about use cases for MQ in z/OS Container Extensions. I covered three different use cases, but perhaps the most interesting one was to do with client concentrators.
I want to use this blog to dig into client concentrators a bit more, the problems they solve, and some other points that need to be taken into consideration before deciding to use them.
Connecting into an MQ for z/OS queue manager
Traditionally, most apps that used MQ for z/OS ran on z/OS and connected directly to queue managers running on the same LPAR using shared memory – this is known as bindings mode. However, for a long time now I have seen more and more apps running on distributed platforms needing to connect into MQ for z/OS. Recently that trend has accelerated, as many customers want to exploit the unparalleled resilience that MQ for z/OS queue sharing groups provide.
In order to connect an app running on a distributed platform to an MQ for z/OS queue manager, you need to connect over the network, and you use a server-conn channel to do this. This is shown in the diagram below.
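As a concrete sketch, the server-conn channel the apps connect through could be defined with MQSC something like the following. The channel name APP.SVRCONN is just an example, not a name from this article:

```
* Hypothetical example: define a server-connection channel for client apps
DEFINE CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) +
       DESCR('Channel for distributed apps connecting into the z/OS qmgr')
```

An app would then connect through this channel by specifying the channel name plus the queue manager's host and port in its client connection configuration.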
Obviously, as more apps connect into a queue manager, it must do more work on their behalf and therefore use more CPU. A particularly expensive operation is the initial connect to the queue manager (MQCONN), which will involve a TLS handshake, plus the security checks performed by the queue manager, and other things.
If you have a large number of apps connecting to a queue manager, sending or receiving a single message, and then disconnecting again, this can result in very high CPU usage indeed. Similarly, apps that sit and poll the queue manager for messages can also result in high CPU usage.
z/OS customers are typically very interested in how much CPU they are using and try to reduce it as much as possible. In many cases this can be done with appropriate app design, but sometimes that isn’t enough, which is where client concentrators come in.
What’s a client concentrator?
A client concentrator is a distributed queue manager that acts as a proxy for the z/OS queue manager. The application connects to the distributed queue manager over a server-conn channel and sends and receives messages; the distributed queue manager moves these messages to and from the z/OS queue manager over a small number of sender-receiver channels.
In this way the distributed queue manager concentrates a large number of server-conn channels into a small number of sender-receiver channels – hence the name client concentrator. Sender-receiver channels are normally long lived, and so typically cheaper than server-conn channels. An environment using a client concentrator is shown below.
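To make this concrete, the sender-receiver plumbing between the concentrator and the z/OS queue manager could be defined with MQSC roughly as follows. All the names here (QMCONC, QMZOS, the channel and queue names, the host and port) are invented for illustration:

```
* On the client concentrator queue manager (QMCONC):
* transmission queue and sender channel towards the z/OS queue manager
DEFINE QLOCAL(QMZOS) USAGE(XMITQ)
DEFINE CHANNEL(QMCONC.TO.QMZOS) CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('zoshost(1414)') XMITQ(QMZOS)
* remote queue definition so apps can address the z/OS queue by a local name
DEFINE QREMOTE(APP.REQUEST) RNAME(APP.REQUEST) RQMNAME(QMZOS)

* On the z/OS queue manager (QMZOS):
* matching receiver channel (a reply path needs the same definitions in reverse)
DEFINE CHANNEL(QMCONC.TO.QMZOS) CHLTYPE(RCVR) TRPTYPE(TCP)
```

Because the transmission queue is named after the remote queue manager, the QREMOTE definition resolves to it by default, so apps putting to APP.REQUEST on QMCONC have their messages forwarded to QMZOS automatically.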
Does it really make that big a difference?
Yes, using a client concentrator can result in a significant CPU reduction. Here is an example chart showing a badly behaved app that does an MQCONN, MQPUT, MQGET and then MQDISC. You can see that in this case server-conn channels are 9 times as expensive as the client concentrator model because of the cost of the MQCONN.
The previous chart showed a worst-case scenario. The following chart shows that for apps with longer-lived connections to MQ the CPU difference is much smaller. In this case the app does MQCONN, 100,000 * (MQPUT, MQGET) and then MQDISC. The per-transaction cost is calculated by taking the total qmgr CPU cost for the app and dividing it by 100,000. This means that the cost of the MQCONN is amortized.
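The amortization effect is easy to see with a back-of-the-envelope model. The relative costs below are invented for illustration – they are not the MP16 measurements – but the shape of the calculation is the same:

```python
# Illustrative cost model for how MQCONN cost amortizes over a connection's
# lifetime. The relative costs are made-up numbers, not MP16 data.

def cpu_per_transaction(connect_cost, put_get_cost, disconnect_cost, n_transactions):
    """Total queue manager CPU for one connection, divided by the
    number of put/get transactions performed on that connection."""
    total = connect_cost + n_transactions * put_get_cost + disconnect_cost
    return total / n_transactions

# Badly behaved app: one put/get per connection
bad = cpu_per_transaction(connect_cost=90, put_get_cost=10,
                          disconnect_cost=5, n_transactions=1)

# Long-lived connection: 100,000 put/gets per connection
good = cpu_per_transaction(connect_cost=90, put_get_cost=10,
                           disconnect_cost=5, n_transactions=100_000)

print(bad)   # 105.0   – dominated by the MQCONN cost
print(good)  # 10.00095 – essentially just the per-message cost
```

With one transaction per connection the MQCONN dominates; spread over 100,000 transactions it all but disappears, which is exactly what the second chart shows.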
These charts show data from performance tests that were run in Hursley. I have simplified them for this blog. Full details of the tests, and more charts, can be found in chapter 8 of support pack MP16.
Should I always use a client concentrator?
Looking at the charts I just showed, you would be forgiven for thinking that you should always use a client concentrator to connect apps to a z/OS queue manager. However, those charts only focus on the CPU usage of the z/OS queue manager; there are lots of other things to take into consideration.
1) Complexity: there is no doubt that the client concentrator model is more complicated – you have just gone from having one queue manager to two! Furthermore, the two queue managers run on very different platforms, which are typically managed by different teams.
2) Latency: client concentrators add latency. If the app sends a message and waits for a response to it, each message now has two network hops and must be processed by two queue managers. Messages sent over sender-receiver channels are often batched up, which adds extra latency too. If the messages are persistent, they must be logged on each queue manager. Whether this latency is a problem or not depends on your app’s needs.
3) High availability: having just one client concentrator queue manager is a single point of failure, which is becoming less and less acceptable in today’s world. There are lots of options for making distributed MQ highly available, but they all increase complexity. Even worse, the reason many distributed apps make use of MQ for z/OS is to take advantage of queue sharing groups, so putting a less available client concentrator in the picture is a bad idea! I would suggest avoiding client concentrators if you want to connect to a queue sharing group.
4) Security: the client concentrator model is secure, but it requires more effort to set up. You will need to use TLS on all the channels and do the necessary certificate management. However, this effort tends to be significantly less than that required for the security configuration of the server-conn channels.
5) Costs: the client concentrator model requires one or more distributed MQ licences, plus somewhere to run the distributed queue managers.
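As a sketch of the security point above, enabling TLS on the sender-receiver pair between the concentrator and the z/OS queue manager might look like this in MQSC. The channel name is an invented example, and the ANY_TLS12_OR_HIGHER CipherSpec requires a reasonably recent MQ version; certificate setup in each queue manager's key repository is still needed on top of this:

```
* On the concentrator: sender channel with TLS enabled
ALTER CHANNEL(QMCONC.TO.QMZOS) CHLTYPE(SDR) SSLCIPH(ANY_TLS12_OR_HIGHER)

* On the z/OS queue manager: receiver channel requiring a partner certificate
ALTER CHANNEL(QMCONC.TO.QMZOS) CHLTYPE(RCVR) SSLCIPH(ANY_TLS12_OR_HIGHER) +
      SSLCAUTH(REQUIRED)
```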
So, you can see there are several things to be balanced against the increased CPU cost of connecting apps directly to MQ for z/OS. Exactly which approach you choose is going to be very dependent on your specific requirements.
Certainly, a lot of customers today make use of client concentrators, but not all of them, and I have seen a high percentage of new app deployments connect directly to MQ for z/OS, especially if queue sharing groups are being used.
Where should the client concentrator queue manager go?
Any distributed queue manager can be a client concentrator. I know of clients using AIX, Linux or MQ Appliance queue managers for this purpose. However, there is value in having dedicated client concentrator queue managers and keeping them as close to the z/OS queue managers as possible to minimise network latency.
A common choice is to put the client concentrator on zLinux, ideally on the same physical hardware as the MQ for z/OS queue manager. This is shown in the diagram below and has the benefit that the connection between the queue managers can be made over a fast in-memory network with very low latency.
Now not every z/OS customer has a zLinux installation, which takes us full circle back to z/OS Container Extensions (zCX). zCX allows you to run any zLinux application, such as distributed MQ, in a container inside a z/OS LPAR. Critically for the client concentrator scenario, the zCX container is on the same LPAR as the MQ for z/OS queue manager, allowing these two related components to run close together, be managed by the same sysprogs, and benefit from a fast in-memory network. This is shown in the diagram below.
In some cases, the CPU cost associated with connecting large numbers of distributed apps directly to MQ for z/OS can be a concern. Often this can be solved by app optimization; if not, a client concentrator queue manager might be an option. However, client concentrators do come with downsides and aren’t always the right answer.
If you do decide to go with a client concentrator queue manager, running it on an IBM Z server is often a good idea to minimize network latency. The recent zCX functionality is a great way of doing this.
If you would like to find out about other scenarios where zCX makes sense with MQ, take a look at the blog I mentioned earlier. For more information about zCX see here.
If you have any questions on this topic, or anything else related to MQ on z/OS, drop me an email at email@example.com.