
An overview of migrating MQ Queue Manager from on-premises to OpenShift

By Kevin Lefebvre posted Wed April 13, 2022 01:27 PM

  
- Kevin Lefebvre and Pam Andrejko

For existing customers who are ready to leverage the benefits of cloud computing and the IBM Cloud Pak platform, the process begins with migrating their existing services to OpenShift. This blog provides an overview of the steps that are required for IBM MQ Queue Manager, along with a few hints and tips along the way.

Considerations before you start
Before attempting the migration, you need to decide which authentication protocol (either certificate-based authentication or LDAP) you want to use with your Queue Manager on OpenShift. The choice may be dictated by what is being used with your on-premises Queue Manager.

IMPORTANT: If your on-premises Queue Manager uses OS-based authentication (meaning a userid and password are used to authenticate with MQ), that option is not available in OpenShift and you will need to plan a move to certificate-based or LDAP authentication.

In a production system, queues should be drained (or backed up) before migration.
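For example, a quick way to confirm that a queue is empty, and optionally back up any remaining messages to a file, is sketched below. This assumes the queue and queue manager names used later in this post; adjust them to match your environment, and note that the dmpmqmsg options shown here browse (rather than remove) the messages:

# Check the current depth of a queue on the on-premises queue manager
echo "DISPLAY QLOCAL('MYQUEUE.1') CURDEPTH" | runmqsc <QUEUE_MANAGER_NAME>

# Optionally copy any remaining messages to a backup file
dmpmqmsg -m <QUEUE_MANAGER_NAME> -i MYQUEUE.1 -f /tmp/myqueue1.backup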


Process

The process involves the following key steps to be performed from your MQ on-premises environment and then in OpenShift. These steps are based on the IBM MQ Transformation Guide.



MQ on-premises 


Assuming you already have an MQ Queue Manager running on-premises, open a terminal on that system and run the dmpmqcfg command to export the environment configuration. The following command exports the configuration to the mqsc.out file (ensure you replace the value of <QUEUE_MANAGER_NAME> with the name of your on-premises queue manager): 
 

dmpmqcfg -m <QUEUE_MANAGER_NAME> -o mqsc > mqsc.out


If you examine the output, you will find the definitions and configurations of the system queues, channels, topics, subscriptions, clusters, etc., as well as the custom artifacts that you created. Ignore the system artifacts and locate the custom artifacts you want to replicate in your OpenShift cluster. For example, as a proof of concept, you could start with a single queue and a channel. We'll use this information in the next section. 
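One quick way to locate the custom definitions in the dump (and skip the SYSTEM objects) is a rough filter such as the following; review the result by hand, because multi-line definitions continue onto subsequent lines:

# List non-SYSTEM DEFINE statements, with their line numbers in mqsc.out
grep -n "DEFINE" mqsc.out | grep -v "SYSTEM\."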
 
OpenShift 
 
Before you can deploy an instance of Queue Manager in OpenShift, you need to create a ConfigMap and a Secret in the OpenShift project where you will deploy the Queue Manager. We created a project named cp4i.
 
1. ConfigMap 


The ConfigMap contains the definitions of the custom queues, channels, topics, subscriptions, and cluster elements that you extracted with the dmpmqcfg command. For example, based on the on-premises configuration, the following ConfigMap defines a channel named DEV1.SVRCONN and a queue named MYQUEUE.1:

kind: ConfigMap
apiVersion: v1
metadata:
  name: mqsc-cm
  namespace: cp4i
data:
  tls.setup: >-
    ALTER QMGR CHLAUTH (DISABLED)

    DEFINE CHANNEL('DEV1.SVRCONN') CHLTYPE(SVRCONN) TRPTYPE(TCP) SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256) SSLCAUTH(OPTIONAL) REPLACE 

    SET CHLAUTH(DEV1.SVRCONN) TYPE(BLOCKUSER) USERLIST('nobody') 

    DEFINE QLOCAL(MYQUEUE.1)
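
Assuming you saved the YAML above to a file named mqsc-cm.yaml, you can create the ConfigMap and confirm that it exists with:

oc apply -f mqsc-cm.yaml
oc get configmap mqsc-cm -n cp4i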

 

TIP:  

  • Channel names do not have to be in upper-case, but using upper-case makes it easier when you create the route for the channel in OpenShift later on.  
  • Channel names must be unique across the cluster. If you have multiple Queue Manager instances across projects, but want to preserve the channel name across projects in some form, you could name the channels CH1-DEV.SVRCONN, CH1-PROD.SVRCONN, etc.
If you plan to use LDAP authentication, you also need to provide additional LDAP configuration in the ConfigMap. Here is an example of the same ConfigMap with the LDAP connection information included. You need to customize the LDAP parameters according to your LDAP configuration and replace the parameters in <brackets>:


kind: ConfigMap
apiVersion: v1
metadata:
  name: mqsc-cm
  namespace: cp4i
data:
  tls.setup: >-
    DEFINE CHANNEL('DEV1.SVRCONN') CHLTYPE(SVRCONN) TRPTYPE(TCP) SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256) SSLCAUTH(OPTIONAL) REPLACE

    DEFINE QLOCAL(MYQUEUE.1)

    DEFINE AUTHINFO('USE.LDAP') AUTHTYPE(IDPWLDAP) ADOPTCTX(YES) CONNAME('<LDAP_URL>(389)') CHCKCLNT(REQUIRED) CLASSGRP('groupOfUniqueNames') FINDGRP('uniqueMember') BASEDNG('ou=groups,dc=ibm,dc=com') BASEDNU('ou=people,dc=ibm,dc=com') LDAPUSER('cn=<LDAP_USER>,dc=ibm,dc=com') LDAPPWD('<LDAP_PASSWORD>') SHORTUSR('uid') GRPFIELD('cn') USRFIELD('uid') AUTHORMD(SEARCHGRP) REPLACE

    ALTER QMGR CONNAUTH(USE.LDAP)

    SET AUTHREC OBJTYPE(QMGR) GROUP('<LDAP_GROUP>') AUTHADD(ALL)

    SET AUTHREC PROFILE(MYQUEUE.1) OBJTYPE(QUEUE) GROUP('<LDAP_GROUP>') AUTHADD(ALL)

    REFRESH SECURITY

    REFRESH SECURITY TYPE(CONNAUTH)

TIP: If your on-premises Queue Manager includes settings in an INI file that you want to preserve in your OpenShift environment (for example, logging customization), you can optionally include those settings in a new ConfigMap as well. The values in this ConfigMap are only consumed when the Queue Manager is deployed and cannot be updated post-deployment. Here is an example of a ConfigMap with INI settings: 

kind: ConfigMap
apiVersion: v1
metadata:
  name: qmall-ini
  namespace: cp4i
data:
  qmall.ini: |-
    Channels:
      ChlauthEarlyAdopt=Yes
      ChlauthIgnoreUserCase=No
    ExitPath:
      ExitsDefaultPath=/var/mqm/exits
      ExitsDefaultPath64=/var/mqm/exits64
    SSL:
      AllowTLSV13=Yes
      MinimumRSAKeySize=1
      OCSPAuthentication=OPTIONAL
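
If you do create an INI ConfigMap, it is referenced from the QueueManager resource in much the same way as the MQSC ConfigMap. The following fragment is a sketch only (it assumes the qmall-ini ConfigMap above) and would be merged into the full QueueManager YAML shown later in this post:

spec:
  queueManager:
    ini:
      - configMap:
          name: qmall-ini
          items:
            - qmall.ini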

 
2. Secret

You need to create a Secret in your OpenShift project to store the relevant TLS certificates. If your existing on-premises configuration and client applications already use certificates for authentication, you can extract them and re-use the same certificates here. Otherwise, you will need to use your trusted CA (or openssl) to generate: 
  • A personal key and certificate 
  • A signer certificate 
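If you do not already have certificates, a minimal self-signed setup with openssl for a proof of concept might look like the following sketch. The file names match the Secret shown next; for production, use certificates issued by your trusted CA instead. With this simple setup there are no intermediate certificates, so only root.crt would go into the trust section later on:

# Create a self-signed root CA (acts as the signer certificate)
openssl req -x509 -newkey rsa:4096 -nodes -days 365 -keyout root.key -out root.crt -subj "/CN=poc-root-ca"

# Create the personal key and a certificate signing request
openssl req -newkey rsa:4096 -nodes -keyout personal.key -out personal.csr -subj "/CN=mqdev1"

# Sign the personal certificate with the root CA
openssl x509 -req -in personal.csr -CA root.crt -CAkey root.key -CAcreateserial -days 365 -out personal.crt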

 
The following secret.yaml file contains a signer certificate chain (including the root.crt, intermediate1.crt, and intermediate2.crt) as well as the personal certificate and key:

kind: Secret
apiVersion: v1
metadata:
  name: mq-certs
  namespace: cp4i
data:
  root.crt: >-
    <paste-your-root-certificate>
  intermediate1.crt: >-
    <paste-your-intermediate-certificate>
  intermediate2.crt: >-
    <paste-your-intermediate2-certificate>
  personal.crt: >-
    <paste-your-personal-certificate>
  personal.key: >-
    <paste-your-personal-key>
type: Opaque
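
Note that the values under data: must be base64-encoded. Alternatively, assuming the certificate and key files are in your current directory, you can let the CLI do the encoding for you:

# Creates the same Secret directly from the certificate and key files
oc create secret generic mq-certs -n cp4i \
  --from-file=root.crt=root.crt \
  --from-file=intermediate1.crt=intermediate1.crt \
  --from-file=intermediate2.crt=intermediate2.crt \
  --from-file=personal.crt=personal.crt \
  --from-file=personal.key=personal.key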

 
3. Deploy MQ Queue Manager on OpenShift

 
With the ConfigMap and Secret created, we are now ready to deploy Queue Manager in the OpenShift project. For reference, the detailed instructions can be found in the IBM documentation: Install the Cloud Pak for Integration Operator Catalog. The first step is to add the IBM Operator Catalog source.

Click the + sign in the OpenShift web console and paste the following text:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-operator-catalog
  namespace: openshift-marketplace
spec:
  displayName: IBM Operator Catalog
  image: 'icr.io/cpopen/ibm-operator-catalog:latest'
  publisher: IBM
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 45m
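
Before moving on, you can confirm that the catalog source is available (the catalog pod may take a minute or two to reach a Running state):

oc get catalogsource ibm-operator-catalog -n openshift-marketplace
oc get pods -n openshift-marketplace | grep ibm-operator-catalog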

 

Then, from the OpenShift web console Administrator perspective, click Operators > OperatorHub, and search for CP4I. 
- Select the IBM Cloud Pak for Integration tile and click Install. 
- Select the IBM MQ tile and click Install. 
 

WARNING: Before proceeding to the next step, ensure that you have followed the instructions to add a pull secret to the cp4i namespace.
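For reference, the pull secret is typically created from your IBM entitlement key along the following lines; the secret name and registry shown here follow the Cloud Pak documentation, so check the instructions for your release:

# Replace <ENTITLEMENT_KEY> with the key from your IBM container software library account
oc create secret docker-registry ibm-entitlement-key -n cp4i \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=<ENTITLEMENT_KEY>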

 

4. Deploy an instance of MQ Queue Manager on OpenShift.

You have three options for deploying an instance of MQ Queue Manager: 

  1. OpenShift web console using OperatorHub: Traditional OpenShift process for deploying operands. See product documentation for instructions.
  2. IBM Cloud Pak for Integration Platform Navigator: More user-friendly way of deploying Cloud Pak services. See product documentation for instructions.
  3. OpenShift CLI with YAML files: A repeatable process for when you are ready to automate the deployment or have a large number of queues, topics, channels, etc. that you want to include in your Queue Manager.


TIP:
If you are using option 1 or 2 to deploy your Queue Manager, you need to expand the Advanced configuration, PKI, and MQSC sections in the web console (or toggle the Advanced settings in the Platform Navigator UI) and provide the PKI Secret (with its certificates) and the ConfigMap that we created above. 
 
For the purposes of these instructions, we will use the third option, the OpenShift CLI. Here we provide a sample definition of a Queue Manager instance named mqdev1 based on the mq-certs Secret and mqsc-cm ConfigMap that we created above.  

If you are not using LDAP, you can apply the following YAML, after replacing <RWX_STORAGE_CLASS> with the name of the RWX File storage class from your OpenShift cluster:

apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: mqdev1
  namespace: cp4i
spec:
  license:
    accept: true
    license: L-RJON-C7QG3S
    use: NonProduction
  queueManager:
    resources:
      limits:
        cpu: 500m
      requests:
        cpu: 500m
    storage:
      queueManager:
        type: ephemeral
      defaultClass: <RWX_STORAGE_CLASS>
    mqsc:
      - configMap:
          name: mqsc-cm
          items:
            - tls.setup
  template:
    pod:
      containers:
        - env:
            - name: MQSNOAUT
              value: 'yes'
          name: qmgr
  version: 9.2.4.0-r1
  web:
    enabled: true
  pki:
    keys:
      - name: personalcert
        secret:
          secretName: mq-certs
          items:
            - personal.crt
            - personal.key
    trust:
      - name: signers
        secret:
          secretName: mq-certs
          items:
            - intermediate1.crt
            - intermediate2.crt
            - root.crt
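
Assuming you saved the YAML above as mqdev1.yaml, apply it with:

oc apply -f mqdev1.yaml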
          

 

 

If you are using LDAP, you can apply the YAML below, after replacing <RWX_STORAGE_CLASS> with the name of the RWX File storage class from your OpenShift cluster.


TIP: When using LDAP authentication, there is one change that needs to be made to the QueueManager YAML. The MQSNOAUT variable is removed entirely from the YAML because it is used to bypass security checking. It was added to the sample profile to enable demo clusters or developers to quickly connect to the QMGR instance without configuring authentication. It should not be included in the YAML if LDAP is configured, and should never be present on production or even test clusters.


apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: mqdev1
  namespace: cp4i
spec:
  license:
    accept: true
    license: L-RJON-C7QG3S
    use: NonProduction
  queueManager:
    name: mqdev1
    resources:
      limits:
        cpu: 500m
      requests:
        cpu: 500m
    storage:
      queueManager:
        type: ephemeral
      defaultClass: <RWX_STORAGE_CLASS>
    mqsc:
      - configMap:
          name: mqsc-cm
          items:
            - tls.setup
  template:
    pod:
      containers:
        - name: qmgr
  version: 9.2.5.0-r1
  web:
    enabled: true
  pki:
    keys:
      - name: personalcert
        secret:
          secretName: mq-certs
          items:
            - personal.key
            - personal.crt
    trust:
      - name: signers
        secret:
          secretName: mq-certs
          items:
            - intermediate1.crt
            - intermediate2.crt
            - root.crt

 

After applying the YAML, it takes a few minutes for the Queue Manager to start. You can verify that the pod started successfully by running the command: 

oc get po -n cp4i | grep mq

 

You should see something similar to: 

NAME                            READY  STATUS     RESTARTS  AGE 
mqdev1-ibm-mq-5c748c756-rvfbm   1/1    Running       4      4m
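
You can also check the QueueManager resource itself; once the instance is ready it should report a Running phase (the exact output columns depend on the operator version):

oc get queuemanager mqdev1 -n cp4i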


TIP: If you later decide you want to add other artifacts from your on-premises MQ server, such as another queue or channel, you can simply edit the mqsc-cm ConfigMap and add those definitions. Then, for the changes to take effect, delete the MQ pod so that it is restarted with the updated configuration. Be aware, however, that any messages on the existing queues are lost when the pod is restarted (this sample uses ephemeral storage).  
 
5. Create a new route 
 
In order to connect to your new Queue Manager on OpenShift from your MQ client, you need to create a new route in OpenShift. MQ uses Server Name Indication (SNI), an extension to the TLS protocol, which allows a client to indicate what service it requires. In IBM MQ terminology, this service equates to a channel. Therefore, you need to create a route in OpenShift for each MQ channel. When you deployed the Queue Manager, a route was automatically generated with the name mqdev1-ibm-mq-qm (see Networking > Routes).  
 
You will use the hostname from that generated route to connect to the Queue Manager instance from your client, but you also need to create another route so OpenShift can route requests to the channel. 
 
The following YAML contains the definition of the new route mqdev1:  

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: mqdev1
  namespace: cp4i
  labels:
    app.kubernetes.io/component: integration
    app.kubernetes.io/instance: mq-ldap
    app.kubernetes.io/managed-by: operator
    app.kubernetes.io/name: ibm-mq
    app.kubernetes.io/version: 9.2.4.0
spec:
  host: dev12e-svrconn.chl.mq.ibm.com
  to:
    kind: Service
    name: mqdev1-ibm-mq
  port:
    targetPort: 1414
  tls:
    termination: passthrough
  wildcardPolicy: None
status:
  ingress:
    - host: dev12e-svrconn.chl.mq.ibm.com
      routerName: default
      wildcardPolicy: None
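
Assuming the route definition above was saved as mqdev1-route.yaml, apply it and confirm that both the generated route and the new channel route now exist:

oc apply -f mqdev1-route.yaml
oc get routes -n cp4i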

 

The key here is specifying the channel name for the host. Recall our channel was named DEV1.SVRCONN. You need to convert the name of the channel to a format that MQ recognizes. In our case, that means converting it to lower case, replacing the dot with 2e- and appending the suffix .chl.mq.ibm.com, which results in dev12e-svrconn.chl.mq.ibm.com. Notice that we also specified port 1414 as the target port.  
 
Connect from your MQ Client 
 
Your Queue Manager instance is running and you have defined a route for your channel. The next step is to modify the connection in your MQ client application to use this instance instead. You will need to provide the following parameters (a sample client configuration follows this list): 
 

- Queue Manager name: Provide the name of the queue manager you created in OpenShift. In our case it is mqdev1.
- Hostname: Use the value of the Hostname from the automatically generated route. 
- Port: 443. Even though your channel is listening on port 1414, OpenShift routes all traffic through port 443 to the channel. You must connect to OpenShift through port 443. 
- Channel: Provide the name of the channel you want to address, in our case DEV1.SVRCONN.
- Queue name: We specified the value MYQUEUE.1 in our ConfigMap.  
- TLS certificates: Provide your public certificate and signer certificate. These certificates are required when using certificate-based authentication and are used to encrypt traffic when the MQ client resides outside the OpenShift cluster. 
- (Optional) LDAP user: Only required when LDAP authentication is used. 
- (Optional) LDAP password: Only required when LDAP authentication is used. 
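
As an illustration of how these parameters fit together, the following sketch shows a JSON client channel definition table (CCDT) that an MQ client application could use. The hostname is a placeholder for the value from your automatically generated route, and the cipher must match the SSLCIPH value in the channel definition from the ConfigMap:

{
  "channel": [
    {
      "name": "DEV1.SVRCONN",
      "type": "clientConnection",
      "clientConnection": {
        "connection": [
          { "host": "<HOSTNAME_FROM_GENERATED_ROUTE>", "port": 443 }
        ],
        "queueManager": "mqdev1"
      },
      "transmissionSecurity": {
        "cipherSpecification": "TLS_RSA_WITH_AES_128_CBC_SHA256"
      }
    }
  ]
}

The client would then reference this file (for example, through the MQCCDTURL environment variable) and a key repository containing the personal certificate and signer chain.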

Conclusion  

These steps have been provided as an overview of the process for migrating your Queue Manager to OpenShift using Cloud Pak for Integration. They are meant to be used for getting started to prototype the process for your MQ migration.  

 

