
Use cases for MQ in z/OS Container Extensions

By MATTHEW LEMING posted Thu January 30, 2020 04:52 AM

  

z/OS 2.4 introduced z/OS Container Extensions (zCX). zCX allows any zLinux app to run in a Docker container on a z/OS LPAR alongside existing z/OS apps. zCX brings many benefits, but it is particularly useful if you only need to run a small number of zLinux apps that connect into your z/OS apps: there is no need for a dedicated zLinux environment or for IFLs. zCX containers can run on zIIPs, making them attractive from a pricing perspective.

This Redbook describes zCX in detail and gives configuration examples. Because MQ for distributed platforms runs on zLinux, the Redbook also walks through installing MQ in a zCX container and connecting it to an MQ for z/OS queue manager.

Software running in a zCX container can only communicate with z/OS software, or with software in another zCX container, using TCP/IP. However, zCX containers can fully exploit HiperSockets, which allow very fast TCP/IP connectivity to any software running on the same server. In particular, MQ apps running in environments such as CICS or IMS can't connect directly to a zLinux queue manager running in zCX; instead they would have to connect to a local MQ for z/OS queue manager first.

The Redbook doesn't really describe the use cases where connecting MQ in zCX to MQ on z/OS makes sense, which is where this blog comes in. There are currently two main use cases where we think MQ in zCX adds value:

1) As a client concentrator
2) As a cluster full repository

Client concentrator

Let's explore the client concentrator use case first. We are seeing a growing number of customers connecting remote client apps directly into MQ on z/OS queue managers. Client apps connecting into MQ on z/OS, especially badly behaved apps that connect and disconnect frequently, can result in the chinit address space using a relatively large amount of CPU. For some customers that CPU use is prohibitive, so they make use of a client concentrator. With a client concentrator, client connections are made to a proxy distributed queue manager, which then routes messages to and from one or more z/OS queue managers over sender/receiver channels.
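To make the client side of this concrete, the sketch below shows a JMS client app connecting to the concentrator queue manager rather than to z/OS directly. It is a minimal illustration only: the host name, port, channel, queue manager name and queue name are assumptions, not values from any particular setup, and the queue used here would typically be a remote queue definition on the concentrator that forwards to the z/OS queue manager.

    import javax.jms.JMSContext;
    import javax.jms.JMSProducer;
    import javax.jms.Queue;

    import com.ibm.msg.client.jms.JmsConnectionFactory;
    import com.ibm.msg.client.jms.JmsFactoryFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class ConcentratorClient {
        public static void main(String[] args) throws Exception {
            JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
            JmsConnectionFactory cf = ff.createConnectionFactory();

            // Connect to the concentrator queue manager, not to the z/OS
            // queue manager itself. All of these values are illustrative.
            cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "zcx.example.com");
            cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);
            cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "APP.SVRCONN");
            cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "CONC1");
            cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

            try (JMSContext context = cf.createContext()) {
                // On the concentrator this name would typically be a remote queue
                // definition that resolves to a queue hosted on the z/OS queue
                // manager, reached over the sender/receiver channel pair.
                Queue target = context.createQueue("queue:///ZOS.REQUEST.QUEUE");
                JMSProducer producer = context.createProducer();
                producer.send(target, "hello from a concentrated client");
            }
        }
    }

Note that if the concentrator later moves, for example into a zCX container, only the connection details the client uses need to change; the application code itself stays the same.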

Use of a client concentrator reduces chinit CPU usage because sender/receiver channels are less CPU intensive than the server-connection channels used by directly connected client apps. However, client concentrators introduce latency and complexity, meaning they are not suitable in all cases.
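The concentrator-side routing consists of a transmission queue, a sender channel to the z/OS queue manager, and a remote queue definition for the client's target queue. This can be set up with MQSC (DEFINE QLOCAL ... USAGE(XMITQ), DEFINE CHANNEL ... CHLTYPE(SDR), DEFINE QREMOTE ...) or programmatically. The sketch below uses the Java PCF classes; every object name, queue manager name and connection name in it is an assumed example, not a recommended configuration.

    import com.ibm.mq.constants.MQConstants;
    import com.ibm.mq.headers.pcf.PCFMessage;
    import com.ibm.mq.headers.pcf.PCFMessageAgent;

    public class DefineConcentratorRouting {
        public static void main(String[] args) throws Exception {
            // Administer the concentrator queue manager over a client connection.
            // Host, port and channel are illustrative values.
            PCFMessageAgent agent =
                    new PCFMessageAgent("zcx.example.com", 1414, "ADMIN.SVRCONN");
            try {
                // Transmission queue holding messages destined for the z/OS queue manager.
                PCFMessage defXmitq = new PCFMessage(MQConstants.MQCMD_CREATE_Q);
                defXmitq.addParameter(MQConstants.MQCA_Q_NAME, "MQZ1.XMITQ");
                defXmitq.addParameter(MQConstants.MQIA_Q_TYPE, MQConstants.MQQT_LOCAL);
                defXmitq.addParameter(MQConstants.MQIA_USAGE, MQConstants.MQUS_TRANSMISSION);
                agent.send(defXmitq);

                // Sender channel from the concentrator to the z/OS queue manager.
                PCFMessage defSdr = new PCFMessage(MQConstants.MQCMD_CREATE_CHANNEL);
                defSdr.addParameter(MQConstants.MQCACH_CHANNEL_NAME, "CONC1.TO.MQZ1");
                defSdr.addParameter(MQConstants.MQIACH_CHANNEL_TYPE, MQConstants.MQCHT_SENDER);
                defSdr.addParameter(MQConstants.MQCACH_CONNECTION_NAME, "zoslpar.example.com(1415)");
                defSdr.addParameter(MQConstants.MQCACH_XMIT_Q_NAME, "MQZ1.XMITQ");
                agent.send(defSdr);

                // Remote queue definition the client apps put to; it resolves to a
                // queue hosted on the z/OS queue manager via the transmission queue.
                PCFMessage defQremote = new PCFMessage(MQConstants.MQCMD_CREATE_Q);
                defQremote.addParameter(MQConstants.MQCA_Q_NAME, "ZOS.REQUEST.QUEUE");
                defQremote.addParameter(MQConstants.MQIA_Q_TYPE, MQConstants.MQQT_REMOTE);
                defQremote.addParameter(MQConstants.MQCA_REMOTE_Q_NAME, "REQUEST.QUEUE");
                defQremote.addParameter(MQConstants.MQCA_REMOTE_Q_MGR_NAME, "MQZ1");
                defQremote.addParameter(MQConstants.MQCA_XMIT_Q_NAME, "MQZ1.XMITQ");
                agent.send(defQremote);
            } finally {
                agent.disconnect();
            }
        }
    }

On the z/OS side a matching receiver channel (CHLTYPE(RCVR)) with the same name completes the sender/receiver pair.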

The figure below shows the difference between a client app connecting directly into MQ on z/OS and connecting via a client concentrator.

[Figure: a client app connecting directly into MQ on z/OS compared with connecting through a client concentrator queue manager]

Different customers choose different form factors for their client concentrator queue manager. Some choose the MQ Appliance because of its easy-to-install form factor and built-in HA. Others choose MQ on zLinux because it runs on the same hardware as the target MQ for z/OS queue manager and can also exploit HiperSockets for high-speed communication between the queue managers.

Now that zCX is available, we believe it is a natural location for running a client concentrator queue manager: there is no need to set up a full zLinux environment or provision other hardware. Instead, run the concentrator on the same z/OS LPAR as the target queue manager, as shown in the figure below.

[Figure: a client concentrator queue manager running in a zCX container on the same z/OS LPAR as the target MQ for z/OS queue manager]

Cluster full repository

Best practice for deploying an MQ cluster is to have two dedicated full repositories which aren't used for application workload. In clusters that span both z/OS and distributed queue managers, these full repositories are typically placed on distributed queue managers to reduce chinit CPU costs. Some customers whose clusters consist only of z/OS queue managers also deploy their full repositories on distributed MQ, complicating their topology. zCX provides the option to run these full repositories on MQ on zLinux queue managers running on the same hardware as the z/OS queue managers that host the cluster queues, again exploiting HiperSockets for a high-speed communications link.
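To illustrate what that involves, the hedged sketch below uses the Java PCF classes to mark a zLinux queue manager running in zCX as a full repository and to give it a cluster-receiver channel; it is the programmatic equivalent of ALTER QMGR REPOS('...') and DEFINE CHANNEL ... CHLTYPE(CLUSRCVR) CLUSTER('...') in MQSC. The cluster, channel, host and connection names are assumptions chosen for illustration.

    import com.ibm.mq.constants.MQConstants;
    import com.ibm.mq.headers.pcf.PCFMessage;
    import com.ibm.mq.headers.pcf.PCFMessageAgent;

    public class MakeFullRepository {
        public static void main(String[] args) throws Exception {
            // Administer the zCX-hosted queue manager; values are illustrative.
            PCFMessageAgent agent =
                    new PCFMessageAgent("zcx.example.com", 1414, "ADMIN.SVRCONN");
            try {
                // Make this queue manager a full repository for the cluster.
                PCFMessage alterQmgr = new PCFMessage(MQConstants.MQCMD_CHANGE_Q_MGR);
                alterQmgr.addParameter(MQConstants.MQCA_REPOSITORY_NAME, "ZOSCLUS");
                agent.send(alterQmgr);

                // Cluster-receiver channel so other cluster members can reach it.
                PCFMessage defClusrcvr = new PCFMessage(MQConstants.MQCMD_CREATE_CHANNEL);
                defClusrcvr.addParameter(MQConstants.MQCACH_CHANNEL_NAME, "ZOSCLUS.FULLREP1");
                defClusrcvr.addParameter(MQConstants.MQIACH_CHANNEL_TYPE, MQConstants.MQCHT_CLUSRCVR);
                defClusrcvr.addParameter(MQConstants.MQCACH_CONNECTION_NAME, "zcx.example.com(1414)");
                defClusrcvr.addParameter(MQConstants.MQCA_CLUSTER_NAME, "ZOSCLUS");
                agent.send(defClusrcvr);
            } finally {
                agent.disconnect();
            }
        }
    }

The z/OS queue managers that host the cluster queues then point their cluster-sender channels (CHLTYPE(CLUSSDR)) at the full repositories in zCX, exactly as they would for full repositories hosted anywhere else.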

A cluster made up entirely of MQ on z/OS queue managers, with its full repositories running in zCX containers, is shown in the figure below.

[Figure: an MQ on z/OS cluster whose two full repositories are hosted by queue managers running in zCX containers]

Over time, as both MQ and zCX evolve, we expect that there will be other valid use cases. As these become available, they will be added to this blog.

If you have any questions or comments on this blog, or indeed on anything to do with MQ on z/OS, please reach out to me at lemingma@uk.ibm.com.


Comments

Tue February 11, 2020 10:41 PM

Matthew, a couple of quick comments from me as elaborations:
1. The z/OS Container Extensions facilitates TCP/IP-based communications with other z/OS hosted resources within the same LPAR (or z/OS guest on z/VM). There's not even any need for a HiperSocket connection in that case -- it's internal to the LPAR, and much more efficient than HiperSockets. Moreover, SMC-D is now the preferred/recommended communications path between z/OS LPARs. (HiperSockets only serve a "backstop" role with SMC-D.) So, in short, I don't think HiperSockets are particularly relevant to this set of use cases. Either you're going to be communicating within the LPAR or, in those cases when you have to cross LPARs on the same machine, SMC-D.

2. For the class of client-related problems you describe, MQTT is likely to be the better protocol choice. So I suggest illustrating use of the z/OS Container Extensions as an MQTT "channel concentrator." MQ for z/OS does not presently support the MQTT protocol directly, so that's a functional benefit with the z/OS Container Extensions.
3. There's absolutely no problem connecting MQ protocol clients directly to MQ for z/OS *and* via a "channel concentrator," so I suggest illustrating both paths in concurrent use. It's certainly not either/or.
Thanks.