
Deep dive: Connecting to a Queue Manager running in OpenShift

By CALLUM JACKSON posted Mon February 15, 2021 10:01 AM

  

Many clients have been running IBM MQ within Red Hat OpenShift for several years, and one of the most frequent questions when a client starts is how to connect to an IBM MQ Queue Manager running within Red Hat OpenShift from outside of the Kubernetes environment. New users are often concerned that it is not possible because MQ is running within a container, or simply do not understand how communication into Red Hat OpenShift works.

The first myth to dispel is that IBM MQ running in a container cannot connect to an MQ instance running on distributed platforms, the mainframe or the MQ Appliance. At its core, the MQ certified container is MQ, and it is therefore able to connect to other instances of MQ on any platform.

The second myth is that communicating into OpenShift is not possible, or is non-standard. This is not the case: OpenShift provides a mature framework to enable communication, and new users are often unaware of its capabilities. The remainder of this document focuses on this communication and common ways to customize it.

Prior to discussing the communication into OpenShift it is important to understand the common topology, and how each component interacts. There are commonly three types of machines in an OpenShift environment:

  • Cluster Masters (Control Plane): a collection of machines that are responsible for the management of the OpenShift Container Platform cluster. Several base Kubernetes services, such as etcd, the Kubernetes API Server and the Kubernetes Controller Manager, run on these machines. OpenShift complements these with additional cluster management capabilities such as the OpenShift OAuth API Server. No user workload is run or managed on these machines. For further information on the Control Plane please consult here.
  • Cluster Workers: these are the machines where MQ queue managers will run. These machines need to be licensed for OpenShift.
  • Cluster Infrastructure Nodes: this is a special, optional sub-classification of Cluster Workers on which no user workload, such as MQ Queue Managers, runs. There are several components that OpenShift provides that are declared as Infrastructure Components; in OpenShift 4.6 this includes the following:
    • Kubernetes and OpenShift Container Platform control plane services that run on masters
    • The default router
    • The container image registry
    • The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects
    • Cluster aggregated logging
    • Service brokers

Users can set up machines that only run the above components; these do not then require the same license as a standard Cluster Worker. For further information please consult here.

The following topology view assumes that Cluster Infrastructure Nodes have been configured, although this is not specifically required for connecting to MQ:

OpenShift provides recommendations on the preferred way to allow communication into an OpenShift cluster, documented here. The preferred option is to use an Ingress Controller for communication that supports TLS with SNI. Server Name Indication (SNI) is an extension to TLS that allows a client to indicate which host it wants to connect to. IBM MQ can use TLS with SNI, and therefore this is the approach taken within the MQ Operator (which deploys the IBM MQ Certified Containers within an OpenShift environment). By default, OpenShift provides a runtime component called a Router that provides ingress. Therefore, communication from outside the OpenShift cluster to a container running within OpenShift will by default use the following path:

The above shows a Client Application connecting; the considerations around Queue Manager to Queue Manager communication are discussed later in the document.

Understanding the OpenShift Router component

The OpenShift Router is responsible for routing inbound connections to the IBM MQ container. While handling the routing for MQ it is also handling the routing for any other deployments that are exposed from this OpenShift cluster, which may include many different queue managers or other network traffic. Therefore, the Router cannot simply route all traffic to the IBM MQ queue manager; it needs to do this selectively, which means a routing key is required. IBM MQ traffic is seen by OpenShift as generic TCP/IP traffic, and there is no routing mechanism for plain TCP/IP that OpenShift can use to forward the traffic correctly. To route this traffic, OpenShift relies on the connection being TLS encoded, because TLS provides a standard routing mechanism over and above TCP/IP called Server Name Indication (SNI), and this is what OpenShift routes look for. Let's underline that: to route MQ connections into an OpenShift cluster using routes, those connections must be TLS encoded. If you want to learn more about TLS SNI headers please consult the specification here.

Let's zoom into the Client Application, Router and IBM MQ interaction to understand in more detail:

  1. The Client Application initiates a TLS connection and sends an SNI header with a defined value (shown as MQid1) within the TLS Client Hello step. This TLS connection is targeted at the Router endpoint.
  2. The Router endpoint receives the TLS connection and looks up the SNI header for its routing information. This header resolves to the internal network address of the IBM MQ container, and the TCP connection is then proxied on to the IBM MQ container.
  3. The IBM MQ container receives the connection, which appears exactly as if it originated from the client application (aside from the source IP address corresponding to the Router component).
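
If you want to observe this behaviour from outside the cluster, one option is to open a TLS connection with a generic tool such as openssl and supply an SNI value explicitly. This is only a sketch: the hostname is the example Route hostname used later in this document, and should be replaced with the Route hostname from your own cluster.

# Open a TLS connection to the OpenShift Router, sending the Route hostname as the SNI value
# (the hostname below is an example - substitute your own Route hostname)
openssl s_client -connect qm1-cp4i-ibm-mq-qm-cp4i.apps.callumj.icp4i.com:443 \
  -servername qm1-cp4i-ibm-mq-qm-cp4i.apps.callumj.icp4i.com

If the SNI value matches a configured Route, the handshake should complete and the queue manager's certificate will be displayed; if it does not match, the Router will typically close the connection.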

Fortunately, IBM MQ added support for the SNI header as part of its SSL/TLS capability back in MQ V8, so SNI routing information already flows when the channel is using TLS. However, that was done so that a single queue manager could route to a specific channel, which may be using a different certificate. This is different to the typical OpenShift scenario, where we want to use SNI to route to one of potentially many MQ queue managers. Bear this in mind as we look at the configuration needed below.

There are three configuration points:

  1. IBM MQ Container (Queue Manager): ensure the IBM MQ container will accept connections with the SNI header
  2. Client Setup: ensure the Client Application, or more precisely the associated MQ client, sends an SNI header with the value expected by the Router.
  3. Router Configuration: configure the Router so that the SNI header maps to the IBM MQ container

Let’s discuss each one of these. 

IBM MQ Container

The inbound channels of the MQ Queue Manager need to be configured with TLS. Support for SNI was introduced in IBM MQ V8; this was available prior to the existence of OpenShift and allowed different certificates to be presented for individual channels. Therefore, no additional setup is required for SNI beyond enabling TLS.
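
As a minimal sketch, TLS can be enabled on an inbound SVRCONN channel with MQSC similar to the following; the channel name and CipherSpec are illustrative, and the queue manager's own certificate is supplied separately through the queue manager (or container) configuration.

* Illustrative MQSC: enable TLS on the inbound client channel
* (channel name and CipherSpec are examples - use values appropriate to your environment)
DEFINE CHANNEL(EXTERNAL.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) +
       SSLCIPH(ANY_TLS12_OR_HIGHER) SSLCAUTH(OPTIONAL) REPLACE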

Client Setup

Technically, any unique string can be used to route traffic using the SNI header, and MQ sets it in different ways for different purposes. The application language and environment determine what is set in the SNI header. This is because some languages were originally enabled to support the multiple-certificates-per-queue-manager capability added in MQ V8, and these encode the target channel name in the SNI header. Other languages follow the more standard SNI model of encoding the hostname of the queue manager into the header, if one is provided.

Default Settings

SNI header set to the hostname (if provided) in the connection information:

  • .NET Clients in managed mode – all versions
  • AMQP Clients – all versions
  • XR Clients – all versions

SNI header set to an encoded value of the channel name:

  • C Clients - V8 onwards
  • .NET Clients in unmanaged mode - V8 onwards
  • Java / JMS Clients (where the Java install supports javax.net.ssl.SNIHostName) – V9.1.1 onwards

For details on the encoding logic please consult the following.

Customizations possible

As you may have deduced, the hostname encoding is a better match for the OpenShift Route scenario than the channel encoding: it is more likely that the hostname is unique than the channel name, and the channel name is also not the easiest thing to encode. In MQ 9.2.1 support was added to switch the behaviour for C, unmanaged .NET and Java/JMS clients from encoding the channel name to sending the configured hostname.

This support is enabled by setting the OutboundSNI variable within the client configuration file to HOSTNAME. For instance:

#*****************************************************************#
#* Module Name: mqclient.ini                                     *#
#* Type       : IBM MQ MQI client configuration file             *#
#* Function   : Define the configuration of a client             *#
#*                                                               *#
#*****************************************************************#
#* Notes      :                                                  *#
#* 1) This file defines the configuration of a client            *#
#*                                                               *#
#*****************************************************************#
SSL:
  OutboundSNI=HOSTNAME


The simplest option for the OpenShift topology is to set up the client to send the hostname defined within the OpenShift Route configuration (discussed in the next section) as the SNI header, unless you are in a client environment where this is not possible, or a queue manager actually requires different certificates for different channels.
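
To make this concrete, here is a minimal Java/JMS sketch, assuming an MQ 9.2.1 or later client with OutboundSNI=HOSTNAME set as above. The queue manager name, channel and cipher suite are illustrative, and the hostname is the example Route hostname shown in the Router Configuration section below.

import javax.jms.JMSContext;
import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class RouteConnectionSketch {
    public static void main(String[] args) throws Exception {
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory cf = ff.createConnectionFactory();

        // The OpenShift Route hostname (example value). With OutboundSNI=HOSTNAME this is
        // also the value sent in the SNI header. The Router listens on port 443.
        cf.setStringProperty(WMQConstants.WMQ_HOST_NAME,
                "qm1-cp4i-ibm-mq-qm-cp4i.apps.callumj.icp4i.com");
        cf.setIntProperty(WMQConstants.WMQ_PORT, 443);
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1");           // example name
        cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "EXTERNAL.SVRCONN");    // example channel
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

        // The connection must be TLS encoded for the Router to route it
        cf.setStringProperty(WMQConstants.WMQ_SSL_CIPHER_SUITE, "TLS_RSA_WITH_AES_128_CBC_SHA256");

        try (JMSContext context = cf.createContext()) {
            System.out.println("Connected via " + context.getMetaData().getJMSProviderName());
        }
    }
}

Note that the JVM also needs to trust the queue manager's certificate (for example via the javax.net.ssl.trustStore system properties), and that Java/JMS clients prior to 9.2.1 will send the encoded channel name in the SNI header rather than this hostname.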
 

Router Configuration

The IBM MQ Operator (by default) configures a Route for MQ traffic when a Queue Manager is deployed. This Route is created with the following resource name:

<QueueManagerResourceName>-<Namespace>-ibm-mq-qm

This will automatically have a hostname assigned based on the OpenShift cluster configuration. For instance, my Route was called:

qm1-cp4i-ibm-mq-qm

The corresponding hostname is:

qm1-cp4i-ibm-mq-qm-cp4i.apps.callumj.icp4i.com

If the MQ client is configured to send the hostname as the SNI header, simply ensure that you specify this hostname in the connection details and no additional configuration is required.
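
If you are unsure which hostname has been assigned to the Route, it can be retrieved with the OpenShift CLI; the Route name and namespace below match the earlier example.

# Show the hostname assigned to the Route created by the MQ Operator
oc get route qm1-cp4i-ibm-mq-qm -n cp4i -o jsonpath='{.spec.host}'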

Creating a custom Route where the SNI header is an encoded channel name

If you are in the situation where the MQ client will send an encoded value of the channel name, a new OpenShift Route will need to be defined. How to configure the OpenShift Route is documented within the Knowledge Center here. In the case that the encoded channel name value is used, the channel name will need to be unique across all MQ instances exposed through the OpenShift Router.
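
The following is only a sketch of such a Route: the name, namespace, Service name and target port are illustrative and may differ in your deployment, and the host value must be the SNI-encoded form of your channel name as described by the encoding rules in the Knowledge Center.

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: qm1-channel-route              # illustrative name for the additional Route
  namespace: cp4i
spec:
  host: <sni-address-for-your-channel> # the SNI encoding of the channel name - see the Knowledge Center
  to:
    kind: Service
    name: qm1-ibm-mq                   # the queue manager's Service (check the name in your deployment)
  port:
    targetPort: 1414                   # the MQ traffic port exposed by the Service
  tls:
    termination: passthrough           # TLS must pass through untouched so MQ can complete the handshake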

Queue Manager to Queue Manager Communication

This blog has focused on client communication into OpenShift; another topology is where Queue Manager to Queue Manager communication needs to be established. This scenario follows the same principles as client communication, but the Queue Manager establishing the inbound (to OpenShift) TLS connection will send the encoded channel name by default (support was added in V8). In MQ 9.2.1 support was added to customize this behaviour using the queue manager configuration file; like the client configuration, this means setting the OutboundSNI variable to HOSTNAME. This is documented within the Knowledge Center here.
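
For reference, this is a minimal sketch of the corresponding stanza in the queue manager configuration file (qm.ini) on the queue manager making the outbound connection:

SSL:
  OutboundSNI=HOSTNAME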

In the case of a cluster receiver channel, the entry point to the Queue Manager is advertised within the CONNAME attribute. This represents the network location that any Queue Manager within the cluster can use for communication. Where the cluster extends outside of the OpenShift environment, this CONNAME should be the hostname and port of the OpenShift Route providing access to the Queue Manager.
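
As an illustrative MQSC fragment, a cluster receiver channel defined on a queue manager inside OpenShift could advertise the Route hostname and the Router's port 443; the channel name, cluster name and hostname are examples only.

* Illustrative cluster receiver definition - advertise the OpenShift Route as the entry point
DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('qm1-cp4i-ibm-mq-qm-cp4i.apps.callumj.icp4i.com(443)') +
       CLUSTER(DEMOCLUSTER) SSLCIPH(ANY_TLS12_OR_HIGHER) REPLACE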

