IBM App Connect Enterprise to IBM MQ Connectivity

Wed July 29, 2020 12:21 PM

Published on July 24, 2020

Practical example - JSON Format Client Channel Definition Table (CCDT) connecting to a Queue Manager Group

By Aiden Gallagher, Natasha Kirkup & Sachin Tanna

Introduction

When integrating IBM App Connect Enterprise (ACE) and IBM MQ it is becoming more likely that each will be deployed to separate servers or containers. With the release of IBM ACE 11.0.0.7, EDA (Event Driven Architecture) nodes can be managed (mostly) by a remote queue manager.

IBM ACE can make use of a Client Channel Definition Table (CCDT) to understand which of multiple IBM MQ Queue Managers it should connect to in order to implement a message flow involving a remote queue manager. Recent changes mean this can now be written as a JSON file which is accessed via a URL or uploaded onto the ACE Server.

CCDTs can be used to describe simple failover functionality, where ACE will connect to a different queue manager when its current connection fails. A CCDT on its own is not used for automatically distributing client traffic into an MQ Estate.

In this article we will show you how to connect ACE to multiple MQ Queue Managers that are part of a Queue Manager Group using a JSON format CCDT.

Audience

It is expected that the reader of this article has prior knowledge of installing ACE and MQ, and of connecting ACE to MQ and the practices involved. If using containers, an understanding of container technology and of how ACE works in containers is beneficial.

ACE and MQ containers can be found on the IBM GitHub: https://github.com/ot4i

Why use a CCDT?

IBM MQ client applications – in this example ACE – use the CCDT to determine the channel definitions and authentication information needed to connect to a queue manager or queue manager group. This means a list of connections can be tried sequentially, with the connection configuration, including security aspects such as cipher specifications, detailed up front.

This means new connections can be defined, but also that the ‘weighting’ of each connection, and whether to prefer retaining a specific connection, can be controlled through the ‘clientWeight’ and ‘affinity’ parameters.
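As an illustration, in a JSON format CCDT these parameters sit in each channel's connectionManagement section. The values below are an assumed example rather than this article's configuration:

```json
"connectionManagement": {
  "clientWeight": 0,
  "affinity": "preferred"
}
```

Roughly speaking, zero-weighted channels are tried first in alphabetical order, while ‘preferred’ affinity makes a client favour the same channel across reconnections.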

Can I use a Uniform Cluster?

From IBM MQ v9.1.2 it’s possible to define a Uniform Cluster – a group of exact-replica Queue Managers that allows Queue Managers (QMs) to be scaled and applications to be automatically rebalanced across the available QMs. An added benefit is that an application only needs to connect to a single Uniform Cluster QM; all other possible connections are then created automatically for the application, which makes better use of the available QMs.

However, there are some scenarios where using a Uniform cluster does not fit the requirements of a deployment:

  • If an ACE application needs to connect to multiple queue managers not in the same uniform cluster.
  • The uniformity of the QMs in the cluster exceeds what is actually required, because some QMs may serve multiple purposes.
For reasons like these it is still relevant to use a CCDT to manage connections.

CCDT in JSON Format

From IBM MQ v9.1.2 it’s possible to define a CCDT in JSON format and edit it in a text editor. This provides the flexibility to adapt a CCDT directly, rather than regenerating it from MQ after every object alteration. The CCDT can still be imported to the application where it will be used; alternatively, MQ can serve the CCDT from a URL for applications to reference.

In our example we will create a CCDT in a text editor, store it in the application (ACE Node) directory and locate it in the application reference configuration (ACE node.conf.yaml).

Queue Manager Group

Queue Manager groups are a logical concept used with CCDTs to define a set of Queue Managers for ACE to connect to. This might be to:

  • Improve availability of queues across multiple connected QMs
  • Generally connect to the same QM, but have an alternative available when that is not possible
  • Have the ability to move/alter a QM without affecting the client
  • Allow a client connection without setting specific QM names
In this example we will use a Queue Manager Group.

Practical Example

Architectural Overview

In this example we will connect two virtual servers with ACE installed to two virtual servers with MQ installed. Note: The example can also be completed with containers, but for containers running version 9.1.4.0 or above there is a dependency on using LDAP, which is not covered in this example.

Figure 1 - High Level Architecture

Figure 1 - High Level Architecture shows:
- Two ACE Integration Nodes called ACEN01 and ACEN02
- Two MQ Queue Managers called QM0A and QM0B
- The two QMs in a logical group called ACEMQGRP

The next steps will show you what should already be built before following this example followed by a series of setup steps in both MQ and ACE.

Prerequisites

  • Installed ACE on two independent virtual Linux servers.
  • Created two ACE Integration Nodes called ACEN01 and ACEN02 (one per virtual server), each with an integration server. If using independent integration servers, you will need two, one per server
  • Installed MQ on two independent virtual Linux Servers.
  • Created a Queue Manager on each virtual Linux server, QM0A and QM0B.
  • Access to an ACE Toolkit
 Table 1 - Servers and Application Objects

  • Ran validation tests to ensure the ACE and MQ system is working correctly
  • The following users/groups setup on each server:
 Table 2 - User and Groups

Setup and Configuration Procedure
Setup MQ Objects

 
Figure 2 - MQ Overview

1. Create queues Q1.IN and Q1.OUT on each Queue Manager. We will use ACE to pick up a message from Q1.IN and put it on Q1.OUT
  • Login to each Queue Manager server as mqm
  • Run the following commands:
    echo "define qlocal(Q1.IN)" | runmqsc ${QMNAME}
    echo "define qlocal(Q1.OUT)" | runmqsc ${QMNAME}
  • Check the queues exist by running the following command:
    echo "display qlocal(Q1*)" | runmqsc ${QMNAME}
2. Create two server connection channels for each Queue Manager to connect to each ACE server. Note: A Client Connection channel is not required as the exported JSON CCDT provides all the information required for ACE to connect to MQ with the server connection channel.
  • Login to each Queue Manager server as mqm
  • Run the following command to create a server connection:
    echo "define channel(${CHLNAME}) chltype(SVRCONN) trptype(TCP)" | runmqsc ${QMNAME}
 Table 3 - Server Connection Details

3. Perform authority permissions on the queue and QM
  • Login to each Queue Manager server as mqm.
  • Create the local user “testuser” on the MQ OS, to allow for IDPWOS authentication inside MQ.
  • Run the following commands:
    echo "set authrec objtype(qmgr) principal('testuser') authadd(all)" | runmqsc ${QMNAME}
    echo "set authrec profile(Q1.IN) objtype(QUEUE) principal('testuser') authadd (get, put, inq, passall, browse)" | runmqsc ${QMNAME}
    echo "set authrec profile(Q1.OUT) objtype(QUEUE) principal('testuser') authadd (get, put, inq, passall, browse)" | runmqsc ${QMNAME}
  • Refresh security of the QM by running the following command:
    echo "refresh security(*)" | runmqsc ${QMNAME}

Install MQ Client on the ACE Servers

1. Copy the MQ installation files from the downloaded MQ Binaries into the ‘/tmp’ directory
2. Login to each ace server as aceadmin and run ‘cd /tmp’
3. Accept the licence as root/sudo: ./mqlicense.sh -accept
4. Install the relevant RPMs, for example on RHEL this will be as follows: rpm -ivh MQSeriesRuntime-Z.Z.Z.Z.rpm MQSeriesClient-Z.Z.Z.Z.rpm where Z.Z.Z.Z is the version of MQ installed
5. Set the MQ Client to be the primary installation: /opt/mqm/bin/setmqinst -i -p /opt/mqm/

Create an MQ Policy for ACE

1. Load the ACE Toolkit.
2. Select ‘File’ -> ‘New’ -> ‘Policy Project’
3. Name it ‘ACEMQGRPPolicy’ and select ‘Finish’
4. In the ‘Application Development’ view, right click the Project ‘ACEMQGRPPolicy’, select ‘New’ -> ‘Policy’
5. Name it ‘ACETOMQ’ and select ‘Finish’
6. Fill out the Policy details with the following:

 

Table 5 - Property Values

Note: Leave the existing default properties as they are

7. Check that the properties file looks like this:

 


8. Save the file.

Create a CCDT & Upload to ACE

1. Create an empty JSON file called ‘ccdt.json’
2. Edit the ccdt.json to contain: { "channel": [] }
3. Add the following channel connection JSON into the channel array to connect and send messages to each of the Queue Managers in a round-robin fashion.
Note: Examples can be found in the IBM Knowledge Center - https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.1.0/com.ibm.mq.con.doc/q132900_.htm


Figure 4 - CCDT Channel JSON
Table 6 - CCDT JSON Connection Array Details

*Note: ACE will select equally weighted channels in alphabetical order
This will connect ACEN01 to QM0A and ACEN02 to QM0B; however, if one QM fails, both ACE servers will connect to the remaining QM and will not return to the preferred QM until restarted.
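To make the channel array concrete, a sketch of a complete ccdt.json for this topology follows. The host names, ports and channel names are assumptions for illustration; the real values come from Table 3 and Table 6. Both entries share the same queueManager value, ACEMQGRP, which is what makes them a queue manager group:

```json
{
  "channel": [
    {
      "name": "QM0A.SVRCONN",
      "type": "clientConnection",
      "clientConnection": {
        "connection": [ { "host": "mq-server-a.example.com", "port": 1414 } ],
        "queueManager": "ACEMQGRP"
      },
      "connectionManagement": {
        "clientWeight": 0,
        "affinity": "preferred"
      }
    },
    {
      "name": "QM0B.SVRCONN",
      "type": "clientConnection",
      "clientConnection": {
        "connection": [ { "host": "mq-server-b.example.com", "port": 1414 } ],
        "queueManager": "ACEMQGRP"
      },
      "connectionManagement": {
        "clientWeight": 0,
        "affinity": "preferred"
      }
    }
  ]
}
```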

4. Save the ccdt.json file
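Before uploading, it can save a debugging cycle to confirm the file is well-formed JSON. This sketch creates a minimal stand-in file purely to demonstrate the check; in practice, point the second command at your real ccdt.json:

```shell
# Create a minimal stand-in file (placeholder content, for demonstration only)
printf '{ "channel": [] }\n' > ccdt-example.json

# json.tool exits non-zero on any JSON syntax error, catching malformed CCDTs early
python3 -m json.tool ccdt-example.json > /dev/null && echo "valid JSON"
```
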
5. Upload the ccdt.json file into the following directories on each of the ACE servers:
Table 7 - CCDT Locations

6. Edit the node.conf.yaml on each of the ACE servers as the aceadmin user
  • Login to the server as the aceadmin user
  • Run ‘vi /var/mqsi/components/${NODENAME}/overrides/node.conf.yaml’
  • In the ‘BrokerRegistry’ stanza, set ‘mqCCDT’ to the location of the ccdt file shown in step 5.
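For reference, the relevant fragment of node.conf.yaml might look like the following; the path here is an assumption, since the exact locations are given in Table 7:

```yaml
BrokerRegistry:
  mqCCDT: '/var/mqsi/components/ACEN01/ccdt.json'
```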
7. Restart the ACE integration node
  • Login to the server as the aceadmin user
  • Run ‘. /opt/IBM/ace-Z.Z.Z/server/bin/mqsiprofile’ Where Z.Z.Z is the version of ACE
  • Stop the ACE integration node ‘mqsistop ${NODENAME}’
  • Start the ACE integration node ‘mqsistart ${NODENAME}’

Create an ACE Flow

1. Load the ACE Toolkit
2. Select ‘File’ -> ‘New’ -> ‘Application’
3. Name it ‘CCDTExample’ and select ‘Finish’
4. In the ‘Application Development’ view, right click the Application ‘CCDTExample’, select ‘New’ -> ‘Message Flow’
5. Name the Message flow name ‘MQtoMQ’ and select ‘Finish’
6. Open the MQtoMQ.msgflow in the message flow editor
7. Drag and drop an ‘MQInput Node’ on the flow editor and give it the following properties:

Table 8 - MQ Input Properties

8. Drag and drop an ‘MQOutput Node’ on the flow editor and give it the following properties:
Table 9 - MQ Output Properties

9. Connect the MQ Input ‘Out’ terminal to the MQ Output ‘In’ terminal
10. Save the message flow

Set user credentials in ACE

1. Login to each ace server as aceadmin
2. Run ‘. /opt/IBM/ace-Z.Z.Z/server/bin/mqsiprofile’ Where Z.Z.Z is the version of ACE
3. Define the security ID by running the following command: ‘mqsisetdbparms ${NODENAME} -n mq::MQUser -u testuser -p testpassword’

Build and Deploy the BAR file

1. Load the ACE Toolkit with the MQ Endpoint Policy and CCDTExample Application on it
2. Select ‘File’ -> ‘New’ -> ‘BAR File’
3. Name the bar file ‘ACETOMQ’ and select ‘Finish’
4. Open the ACETOMQ.bar file in the BAR File editor
5. In the Prepare pane, select the ‘ACEMQGRPPolicy’ policy and the ‘CCDTExample’ application and select ‘Build and Save…’
6. Login to each node’s webGUI and select the server ACES01
7. Select ‘Deploy’ and ‘Add a BAR file’, select the ACETOMQ.bar file and select ‘Choose’
8. Select ‘Deploy’

Testing the ACE to MQ Connection

To test our setup, we want to ensure that both ACE integration nodes are able to GET messages from Q1.IN and PUT them to Q1.OUT on both Queue Managers. To do this we will put a message on each QM’s Q1.IN and then inspect its Q1.OUT.

To test both ACE integration nodes, we first try with only a single integration node active, then switch the active integration node and repeat the test.


Figure 5 - Test Data Flow Diagram

Test Procedure for ACEN01

1. Stop the ACEN02 integration node.

  • Login to ACEN02 server as aceadmin
  • Run ‘. /opt/IBM/ace-Z.Z.Z/server/bin/mqsiprofile’ Where Z.Z.Z is the version of ACE
  • Stop the integration node ‘mqsistop ACEN02’
2. Check the ACEN01 integration is started.

  • Login to ACEN01 server as aceadmin
  • Run ‘. /opt/IBM/ace-Z.Z.Z/server/bin/mqsiprofile’ Where Z.Z.Z is the version of ACE
  • Start the integration node ‘mqsistart ACEN01’
3. Test message move from Q1.IN to Q1.OUT on QM0A.

  • Login to QM0A server as testuser
  • Run ‘/opt/mqm/samp/bin/amqsput Q1.IN QM0A’
  • Type the message ‘Hello World’ and press Enter twice
  • Run ‘/opt/mqm/samp/bin/amqsget Q1.OUT QM0A’
  • Expect to see the message ‘Hello World’ returned
4. Repeat Step 3 for QM0B.

Test Procedure for ACEN02

1. Stop the ACEN01 integration node.

  • Login to ACEN01 server as aceadmin
  • Run ‘. /opt/IBM/ace-Z.Z.Z/server/bin/mqsiprofile’ Where Z.Z.Z is the version of ACE
  • Stop the integration node ‘mqsistop ACEN01’
2. Check the ACEN02 integration is started.

  • Login to ACEN02 server as aceadmin
  • Run ‘. /opt/IBM/ace-Z.Z.Z/server/bin/mqsiprofile’ Where Z.Z.Z is the version of ACE
  • Start the integration node ‘mqsistart ACEN02’
3. Test message move from Q1.IN to Q1.OUT on QM0A.

  • Login to QM0A server as testuser
  • Run ‘/opt/mqm/samp/bin/amqsput Q1.IN QM0A’
  • Type the message ‘Hello World’ and press Enter twice
  • Run ‘/opt/mqm/samp/bin/amqsget Q1.OUT QM0A’
  • Expect to see the message ‘Hello World’ returned
4. Repeat Step 3 for QM0B.

Debugging ACE to MQ Connection

If the tests do not work, there are several key places to look for issues.

MQ Standard Logs

On each Queue Manager’s server, check the following logs:
/var/mqm/errors
/var/mqm/qmgrs/${QMNAME}/errors
stdout of the containerised versions of ACE and MQ

Some common errors:
Table 10 - Common MQ Errors

ACE Standard Logs

On the servers of each of the ACE integration nodes check the deployment passed correctly by checking the following log: /var/log/messages

ACE Flow Trace

If both deployments are correct and there are no errors in the MQ or ACE Standard Logs then you should run a user trace of the flow. Perform the following steps:

1. Turn on the user trace on the failing node:

  • Login to the node’s webGUI and select the server ACES01
  • Above the ‘Deploy’ button is an ‘options’ icon, select this icon and a drop down will appear
  • Select ‘Reset User Trace’ and then ‘Start User Trace’
2. Put a message to the failing queue manager’s queue

  • Login to QM0A server as testuser
  • Run ‘/opt/mqm/samp/bin/amqsput Q1.IN QM0A’
3. Stop the user trace

  • Switch to the node’s webGUI in the ACES01 view
  • Above the ‘Deploy’ button is an ‘options’ icon, select this icon and a drop down will appear
  • Select ‘Stop User Trace’
4. Read the user trace logs on the failing node:

  • Login to node as aceadmin
  • Check the following logs: /var/mqsi/common/log/ACEN01.ACES01.userTrace.0.txt
  • Identify any errors and action any fixes

Conclusion

After following this article, you will be able to connect two ACE integration nodes to two MQ Queue Managers, all hosted on different machines. The Queue Managers are part of a Queue Manager Group whose connections are defined in a JSON format CCDT. The CCDT is available on the ACE integration node and is referenced by an ACE BAR file in the form of an application and an MQ Policy.

Reference

New Features in ACEv11:
Explore the new features in App Connect Enterprise version 11.0.0.7
Configuring a JSON format CCDT
Queue manager groups in the CCDT
Limitations and Considerations for Uniform Clusters
WebSphere MQ v7.0 Features and Enhancements (Sections 5.7)
#AppConnectEnterprise(ACE)
#ACEV11
#IBMMQ
#connectivity

Comments

Fri October 06, 2023 04:17 AM

Hi Abhishek, 

I would suggest you raise an IBM Support Case to look at what specifically is happening in your scenario and whether that is the expected behaviour of the product. Especially given that the request is urgent. 

Thu October 05, 2023 03:06 PM

Hello,

We are facing a major issue with MQInput Node which is used to read messages from remote MQ and transfer to BAW.

Scenario:
1. We have an IBM MQ Queue which is holding lot of messages which are piled up and expected to be read from IBM AppConnect.
2. We are deploying IBM ACE flow using Jenkins Pipeline over RHEL OCP 4.10 CP4I container environment.

Problem Statement:
As soon as Jenkins job is triggered to deploy IBM ACE project to OCP environment, it establishes MQ connection while the pod state shows as Running but not in 1/1 Ready State instead it is still coming up.

During this time when the pod is still not READY, premature ACE flow gets triggered which in turn results in losing MQ messages as ACE flow is still not ready to process the messages further.

Once the pod is in READY state, rest of MQ messages are processed successfully by ACE flow pod.

To conclude, how to fix this issue of premature ACE flow execution even before ACE pod moves into READY state.

Do we have any IdleTimeOut property which can be used over MQInputNode to delay processing of MQ messages until POD state is READY.

This is a business blocker issue reported for one of the critical insurance application to replace legacy IBM SCA module to IBM ACE.

We need urgent resolution for the issue.

Thanks.

Mon September 07, 2020 04:13 PM

Thanks Aiden for this article. We are going to use it to migrate our current IBM Integration Bus version 9 to IBM ACE version 11.0.0.9

We have only one question:
In the same way you made available the location of the ccdt file, how can we set the MQEndpoint policy to be the default policy for all nodes that need an MQ connection?

Thanks.