Monitoring IBM MQ queue depth in Cloud Pak for Integration

By Matt Roberts posted Tue May 04, 2021 03:39 AM

  
To simplify operational monitoring and alerting on Red Hat OpenShift, the IBM MQ Certified Container that is delivered in Cloud Pak for Integration emits a range of queue manager (server) scope metrics over a Prometheus interface that is consumed by the OpenShift monitoring tools.

Alongside the queue manager metrics, another monitoring best practice is to track metrics for individual queues and topics, such as queue depth, which give an indication of application health. However, the MQ Certified Container does not currently offer an option to publish this finer-grained data. It has always been possible to configure monitoring for IBM MQ queue and topic metrics in OpenShift with some manual effort, but thanks to recent changes we have now made it easier than ever – as you’ll see in this tutorial!

In this blog I will show how to deploy an MQ Prometheus monitor pod that connects to the queue manager in order to publish the additional metrics that are not exposed directly. Using a demonstration cluster running in OpenShift on IBM Cloud, I will then show how to use the native IBM Cloud Monitoring service to integrate the new data points with your operational monitoring and alerting tools such as PagerDuty.

Although the example here uses IBM Cloud Monitoring, the same techniques can be used in any environment using the Prometheus interface to inject the metrics into your own preferred monitoring tool.

Let's get started!


Step 1: Build the MQ metrics package to create a Prometheus monitor pod for the queue manager

The IBM MQ Prometheus monitor is part of the mq-metric-samples open-source repo hosted on github.com that enables us to quickly and easily build a Prometheus monitoring agent that connects to an IBM MQ queue manager and exposes a wide variety of statistics about that queue manager.

To get started, let’s download and compile the Prometheus monitor into a Docker container locally on our laptop.

# Create a directory to host the GitHub repo if you don’t have one already
cd my-workspace
mkdir ibm-messaging
cd ibm-messaging

# Clone the metrics repo onto your local machine
git clone git@github.com:ibm-messaging/mq-metric-samples.git

Now we can use the pre-supplied script to create a local Docker container that exposes a Prometheus endpoint. The endpoint presents our MQ metrics in the standard format defined by Prometheus, which enables the data to be consumed easily by a wide range of monitoring offerings.

cd mq-metric-samples/scripts
./buildRuntime.sh mq_prometheus

Building container mq-metric-samples-gobuild:5.2.0
...
Building mq_prometheus

Compiled programs should now be in /Users/myuser/tmp/mq-metric-samples/bin
...
=> exporting to image
=> => exporting layers        
=> => writing image sha256:1333ef...69d7c                    
=> => naming to docker.io/library/mq-metric-prometheus:5.2.0


We can see the built Docker container on the local machine as follows:

docker images 

REPOSITORY                TAG     IMAGE ID       CREATED         SIZE
mq-metric-prometheus      5.2.0   13332223dbe2   2 minutes ago   202MB

Optionally, if we have a queue manager that is accessible from the local laptop then we can launch the metrics agent Docker container to demonstrate that the container exposes the Prometheus endpoint as we expect:

# Modify the values in the environment variables to match your configuration
# Keep the CONFIGURATIONFILE variable set to the empty string
docker run --rm -p 9157:9157 \
              -e IBMMQ_CONNECTION_QUEUEMANAGER="QM1" \
              -e IBMMQ_CONNECTION_CONNNAME="myhost(1414)" \
              -e IBMMQ_CONNECTION_CHANNEL="SYSTEM.DEF.SVRCONN" \
              -e IBMMQ_CONNECTION_USER="myusername" \
              -e IBMMQ_CONNECTION_PASSWORD="mypassword" \
              -e IBMMQ_GLOBAL_CONFIGURATIONFILE="" \
              -it mq-metric-prometheus:5.2.0

IBM MQ metrics exporter for Prometheus monitoring
Build         : 20210410-173628
Commit Level  : 3dd2c0d
Build Platform: Darwin/

INFO[0000] Trying to connect as client using ConnName: myhost(1414), Channel: SYSTEM.DEF.SVRCONN
INFO[0004] Connected to queue manager  QM1            
INFO[0029] IBMMQ Describe started                     
INFO[0029] Platform is UNIX                           
INFO[0029] Listening on http address :9157

You can then connect to the local Prometheus endpoint using your browser at http://localhost:9157/metrics
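
For example, a quick way to check that the per-queue data is present (a hedged check, assuming curl is installed locally and the queue manager hosts at least one non-SYSTEM queue) is to filter the endpoint output for the queue depth metric:

# Show only the per-queue depth metrics exposed by the local container
curl -s http://localhost:9157/metrics | grep ibmmq_queue_depth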



Step 2: Deploy the Prometheus monitor to an OpenShift cluster and see it presenting queue depth data via the Prometheus endpoint

Now we will deploy the Prometheus monitor to a real OpenShift cluster – in this example running in IBM Cloud.

To start with, we have to push the Docker image from our local machine up to a container registry instance that can be accessed by the cluster, for which we will use the IBM Cloud Container Registry:

ibmcloud login
ibmcloud cr login
ibmcloud cr region-set uk-south
ibmcloud cr namespace-add my-icr-namespace

docker tag mq-metric-prometheus:5.2.0 uk.icr.io/my-icr-namespace/mq-metric-prometheus:1.0
docker push uk.icr.io/my-icr-namespace/mq-metric-prometheus:1.0

ibmcloud cr image-list --restrict my-icr-namespace

Listing images...

Repository                                        Tag   Digest         Namespace          Created      Size
uk.icr.io/my-icr-namespace/mq-metric-prometheus   1.0   3e1111098ab2   my-icr-namespace   1 hour ago   76 MB

OK

Now we log in to the OpenShift cluster using the “oc” CLI tool, switch to the OpenShift project (Kubernetes namespace) where our queue manager is running, and, if we haven’t done so already, add the configuration that allows that namespace to pull images from the Container Registry:

# Apply your specific login properties here to match your cluster
oc login --token=sha256~<yourtoken> --server=https://yourserver:yourport

# Switch to the namespace that contains your IBM MQ queue manager pod
oc project cp4i

# If needed, copy the ICR secret to this namespace so that it can pull the container image
# (insert your own namespace in place of “cp4i” if needed)
oc get secret all-icr-io -n default -o yaml | sed 's/default/cp4i/g' | oc create -n cp4i -f -


We will pass the configuration settings to the metrics monitor pod using a ConfigMap and a Secret, so create those now, modifying the Connection properties in the example below with the values necessary to address the queue manager, plus any other customization you wish to carry out. In particular, the CONNNAME attribute must be set to the name of the “Service” object that is created for your queue manager, which allows it to be accessed from inside the cluster.
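
If you are not sure which Service name to use, you can list the Services in the project – in this example the queue manager is named “quickstart”, so the matching Service is quickstart-cp4i-ibm-mq (your names will differ):

# List the Services in the current project; in this example the queue manager's
# 1414 listener is exposed by the Service whose name ends in "-ibm-mq"
oc get services | grep ibm-mq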

The “objects” and “global” properties specified in the ConfigMap example below give the following useful behavior for our specific scenario:

  • The QUEUES attribute instructs the monitoring agent to look for any queues that don’t start with SYSTEM or AMQ (those being generally for internal use by the queue manager). You may wish to customize this further to meet your needs
  • The SUBSCRIPTIONS attribute excludes subscriptions owned by the queue manager itself
  • The TOPICS attribute disables metrics for all topics as we are going to look only at queues in this example
  • The settings for USEPUBLICATIONS and USEOBJECTSTATUS reduce the set of metrics that will be emitted by this monitor so that it avoids overlapping too much with the metrics that are already emitted automatically by the queue manager container
  • The CONFIGURATIONFILE attribute is set to an empty string, which instructs the container not to look for a configuration file, since we are applying our configuration using environment variables from the ConfigMap
  • LOGLEVEL is set to INFO – in some cases you might modify this to DEBUG if you wish to do detailed investigation of the statistics

oc create configmap metrics-configuration \
    --from-literal=IBMMQ_CONNECTION_QUEUEMANAGER='QM1' \
    --from-literal=IBMMQ_CONNECTION_CONNNAME='quickstart-cp4i-ibm-mq(1414)' \
    --from-literal=IBMMQ_CONNECTION_CHANNEL='SYSTEM.DEF.SVRCONN' \
    --from-literal=IBMMQ_OBJECTS_QUEUES='*,!SYSTEM.*,!AMQ.*' \
    --from-literal=IBMMQ_OBJECTS_SUBSCRIPTIONS='!$SYS*' \
    --from-literal=IBMMQ_OBJECTS_TOPICS='!*' \
    --from-literal=IBMMQ_GLOBAL_USEPUBLICATIONS=false \
    --from-literal=IBMMQ_GLOBAL_USEOBJECTSTATUS=true \
    --from-literal=IBMMQ_GLOBAL_CONFIGURATIONFILE='' \
    --from-literal=IBMMQ_GLOBAL_LOGLEVEL=INFO

# Also create a Secret that defines the username and password to be used to access
# the queue manager. Leave these values empty if no credentials are required
oc create secret generic metrics-credentials \
   --from-literal=IBMMQ_CONNECTION_USER='' \
   --from-literal=IBMMQ_CONNECTION_PASSWORD=''

It is important that the settings you define above allow the metrics pod to connect successfully to the queue manager. For example, if you are using the default settings for an MQ v9.2 queue manager in Cloud Pak for Integration, applications are not able to connect using the SYSTEM.DEF.SVRCONN channel – you can relax that default restriction for development purposes using the following command against your queue manager pod. In other cases you may need to check your queue manager security configuration to establish how to grant access to your application.

oc exec quickstart-cp4i-ibm-mq-0 -- /bin/bash -c "echo 'SET CHLAUTH('SYSTEM.*') TYPE(ADDRESSMAP) ADDRESS(*) ACTION(REMOVE)' | runmqsc"
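
Optionally, you can display the channel authentication rules that remain in place to confirm the change took effect (an illustrative check, using the same example pod name as above):

# Show the current CHLAUTH rules on the queue manager
oc exec quickstart-cp4i-ibm-mq-0 -- /bin/bash -c "echo 'DISPLAY CHLAUTH(*)' | runmqsc"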

Now is also a good time to create any queues that you want to use in your queue manager. By default the metrics pod queries the list of queues at startup time, and then only once per hour after that (which can be modified using the IBMMQ_GLOBAL_REDISCOVERINTERVAL property if desired), so if you create additional queues after deploying the metrics pod they will not show up in the metrics until you either restart the metrics pod or wait for up to an hour!
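
For example, a hedged one-liner that defines the MARKETING queue used later in the alerting step (adjust the pod and queue names to suit your environment):

oc exec quickstart-cp4i-ibm-mq-0 -- /bin/bash -c "echo 'DEFINE QLOCAL(MARKETING)' | runmqsc"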

We’re now ready to deploy the metrics pod using the sample OpenShift objects provided in the github repo:

cd mq-metric-samples/cp4i

# Create a new ServiceAccount that will ensure the metrics pod is
# deployed using the most secure Restricted SCC
oc apply -f sa-pod-deployer.yaml

# Update the spec.containers.image attribute in metrics-pod.yaml to match
# your container registry and image name
vi metrics-pod.yaml

# Deploy the metrics pod using the service account
oc apply -f ./metrics-pod.yaml --as=my-service-account

# Create a Service object that exposes the metrics pod so that it can
# be discovered by monitoring tools that are looking for Prometheus endpoints
#
# Note that the spec.selector.app matches the metadata.labels.app property
# defined in metrics-pod.yaml
oc apply -f ./metrics-service.yaml
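
For reference, a minimal sketch of the kind of Service that metrics-service.yaml defines is shown below – the annotations are what advertise the Prometheus scrape endpoint to monitoring agents. This is illustrative only (the names and annotation style are assumptions based on the earlier steps); treat the file shipped in the repo as the authoritative version.

apiVersion: v1
kind: Service
metadata:
  name: mq-metric-prometheus-service
  annotations:
    prometheus.io/scrape: "true"    # tell Prometheus-compatible agents to scrape this Service
    prometheus.io/port: "9157"      # the port the metrics pod listens on
spec:
  selector:
    app: mq-metric-prometheus       # must match metadata.labels.app in metrics-pod.yaml
  ports:
    - name: metrics
      port: 9157
      targetPort: 9157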

If everything has gone to plan, we can now look at the logs of the metrics pod and see that it has started up successfully, and is being polled by the monitoring infrastructure roughly once a minute:

oc logs mq-metric-prometheus

IBM MQ metrics exporter for Prometheus monitoring
Build         : 20210410-173628
Commit Level  : 3dd2c0d
Build Platform: Darwin/

time="2021-04-13T20:12:52Z" level=info msg="Trying to connect as client using ConnName: quickstart-cp4i-ibm-mq(1414), Channel: SYSTEM.DEF.SVRCONN"
time="2021-04-13T20:12:52Z" level=info msg="Connected to queue manager  QM1"
time="2021-04-13T20:12:52Z" level=info msg="IBMMQ Describe started"
time="2021-04-13T20:12:52Z" level=info msg="Platform is UNIX"
time="2021-04-13T20:12:52Z" level=info msg="Listening on http address :9157"
time="2021-04-13T20:12:55Z" level=info msg="IBMMQ Collect started 14000001720300"
time="2021-04-13T20:12:55Z" level=info msg="Collection time = 0 secs"
time="2021-04-13T20:13:55Z" level=info msg="IBMMQ Collect started 14000003035700"
time="2021-04-13T20:13:55Z" level=info msg="Collection time = 0 secs"

Optionally, if you want to see the data being emitted by the metrics pods you can make your own call to the Prometheus endpoint by exec’ing into your queue manager pod and using curl to call the endpoint, for example:

oc exec quickstart-cp4i-ibm-mq-0 -- /bin/bash -c "curl mq-metric-prometheus-service:9157/metrics"

...
# HELP ibmmq_qmgr_channel_initiator_status Channel Initiator Status
# TYPE ibmmq_qmgr_channel_initiator_status gauge
ibmmq_qmgr_channel_initiator_status{platform="UNIX",qmgr="QM1"} 2
# HELP ibmmq_qmgr_command_server_status Command Server Status
# TYPE ibmmq_qmgr_command_server_status gauge
ibmmq_qmgr_command_server_status{platform="UNIX",qmgr="QM1"} 2
# HELP ibmmq_qmgr_connection_count Connection Count
# TYPE ibmmq_qmgr_connection_count gauge
ibmmq_qmgr_connection_count{platform="UNIX",qmgr="QM1"} 26


Debugging problems

If your metrics pod does not work as shown in the previous two snippets you will need to debug the cause of the failure, which is typically either due to problems connecting to the queue manager (such as incorrect hostname or port), or authorization errors when connecting or opening relevant queues (due to the MQ security settings).

You can generate additional information on failures by updating IBMMQ_GLOBAL_LOGLEVEL=DEBUG in the ConfigMap and then restarting the metrics pod for the change to take effect. This prints all the configuration variables that are read in when the pod starts, and also prints MQ error codes such as “MQRC_NOT_AUTHORIZED [2035]” when failures occur.
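
A minimal sketch of that change, assuming the ConfigMap and pod names used earlier in this tutorial (the sample deploys a bare Pod rather than a Deployment, so re-apply the pod definition after deleting it):

# Switch the metrics pod to DEBUG logging
oc patch configmap metrics-configuration --type merge -p '{"data":{"IBMMQ_GLOBAL_LOGLEVEL":"DEBUG"}}'

# Restart the pod so that it re-reads the ConfigMap values
oc delete pod mq-metric-prometheus
oc apply -f ./metrics-pod.yaml --as=my-service-account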



Step 3: Configure the IBM Cloud Monitoring service to collect data from the OpenShift cluster

If you haven’t done so already, create an instance of the IBM Cloud Monitoring service, which will store the data from our OpenShift cluster. The Provisioning an instance page describes how to do this from the catalog UI or via the IBM Cloud CLI.

Once you have provisioned an instance you will be able to see it from the Monitoring tab of the IBM Cloud Observability dashboard as shown below:



Next, you will create a monitoring configuration that connects your cluster to the Monitoring instance you created, using the ob monitoring config create command.

Note that before you can successfully execute the command, you may require your IBM Cloud account administrator to grant you the “Minimum required permissions” that are described in that linked documentation page.

# Replace the cluster and instance parameters with the names for your objects
ibmcloud ob monitoring config create --cluster myclustername --instance "IBM Cloud Monitoring-mattr-test"

Creating configuration...
OK

With the monitoring configuration in place the metrics that are being emitted by the metrics pod will now start flowing automatically into your IBM Cloud Monitoring instance!



Step 4: Visualize the queue depth data in IBM Cloud Monitoring

To see your metrics in action you can now open your Monitoring instance by clicking on the “Open dashboard” link on the right-hand side of the Monitoring tab in the IBM Cloud Observability page.

Click on the “cpu.used.percent” field that is selected by default and type “queue” into the filter dialog; you will then be able to select the “ibmmq_queue_depth” attribute from the drop-down list as shown here:


Then continue by configuring the graph settings as shown here:

  • Time: Maximum
  • Group: Maximum
  • Segment by: “queue” (type in this word)
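
If you prefer to express this as a query, IBM Cloud Monitoring (which is based on Sysdig) and most other Prometheus-compatible tools also accept PromQL. An equivalent to the graph settings above might look like the following sketch, assuming the QM1 queue manager name used earlier:

max by (queue) (ibmmq_queue_depth{qmgr="QM1"})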


Now if you put some messages onto your queues using your favorite application, you will see the respective queue depths appear in the Monitoring view before your very eyes!


Note that a number of factors affect how quickly the data will show up in the Monitoring dashboard:

  • Frequency with which the metrics pod queries the queue manager for new data (which is configurable using the IBMMQ_GLOBAL_POLLINTERVAL property)
  • Frequency with which the Prometheus infrastructure in the cluster polls the metrics endpoint (typically 1 minute)
  • Latency in transmitting the data to the IBM Cloud Monitoring infrastructure

Typically the data arrives in the Monitoring service within a minute or two, but you may need to investigate tuning the relevant parameters if you wish to increase the speed with which data is delivered.
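
For example, a hedged way to shorten the metrics pod’s own polling interval is to add the IBMMQ_GLOBAL_POLLINTERVAL property to the ConfigMap (check the mq-metric-samples documentation for the exact value format before relying on this):

oc patch configmap metrics-configuration --type merge -p '{"data":{"IBMMQ_GLOBAL_POLLINTERVAL":"30s"}}'
# ...then restart the metrics pod, as described in the debugging section above, so it picks up the new value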



Step 5: Configure an alert to automatically notify your Operations team via PagerDuty if queue depth exceeds the expected level

As a busy Ops/SRE practitioner I don’t want to wait around watching a UI all day and night to find out if my application is misbehaving, so we can now use the built-in functions of IBM Cloud Monitoring to create a push alert if our monitoring detects that a queue is filling up beyond what would typically be expected.

In fact, it’s very easy to set up a wide range of alert types using IBM Cloud Monitoring by switching over to the Alerts tab, where in a matter of seconds we can set up an alert that issues a PagerDuty notification if the depth of the MARKETING queue goes over 50 messages for a period of 2 minutes, as shown below:
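
For readers using a plain Prometheus stack rather than IBM Cloud Monitoring, a roughly equivalent Prometheus alerting rule would look like the following sketch (the rule group and alert names are hypothetical; the metric and queue label are those shown in the earlier steps):

groups:
  - name: mq-queue-depth                 # hypothetical rule group name
    rules:
      - alert: MarketingQueueBacklog
        expr: ibmmq_queue_depth{queue="MARKETING"} > 50
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "MARKETING queue depth has been above 50 for 2 minutes on {{ $labels.qmgr }}"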



Summary

In this tutorial you have seen how in a matter of minutes you can use the sample IBM MQ Prometheus metrics package to:

  • build and deploy a new container into an OpenShift cluster running in IBM Cloud that monitors the depth of queues on an IBM MQ queue manager, and
  • report that information into the IBM Cloud Monitoring infrastructure, from where you can use powerful alerting mechanisms to notify your Ops team if the system is experiencing problems.

The exact same techniques can also be used with OpenShift clusters running in any type of environment – simply use the MQ Prometheus metrics package to emit your metrics into your own monitoring infrastructure via the Prometheus interface.
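
For instance, if you run your own Prometheus server inside the cluster, a minimal static scrape configuration against the Service created in Step 2 might look like the following sketch (it assumes the Service name, namespace and port used earlier, and that Prometheus can resolve the Service DNS name; in practice you may prefer Kubernetes service discovery or the annotation-based approach shown above):

scrape_configs:
  - job_name: ibmmq-queue-metrics        # hypothetical job name
    scrape_interval: 60s
    static_configs:
      - targets: ['mq-metric-prometheus-service.cp4i.svc:9157']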

Happy monitoring of IBM MQ in Cloud Pak for Integration!



Matt Roberts
STSM and Lead Architect, IBM Cloud Pak for Integration


#cloudpakforintegration #mq #cp4i #bestpractice




Comments

14 days ago

@Othmane JABRI I wrote a blog which does just that, it installs the mq-metric-sample on the base MQ image from IBM, the metrics are then scraped by Prometheus, have a look :) https://medium.com/@abudavis/installing-metrics-program-on-ibm-mq-image-d29fdb589c12

Fri August 02, 2024 12:08 PM

Hi @Othmane JABRI - the way this sample is set up it uses a separate monitoring pod for each queue manager.

If you're looking to monitor large numbers of queue managers in a single cluster all at once then something like Instana may be the right approach for that type of scenario (which does not use the monitor pod approach described here).

Regards, Matt.

Fri August 02, 2024 08:28 AM

Hi @Matt Roberts, how do you target all Queue Manager pods deployed on an OCP cluster with just one Prometheus metrics pod?

Mon April 04, 2022 05:01 AM

I finally decided to move away from the IBM Monitoring console and forward to Grafana instead, which works like a charm. Is there a possibility to modify the metrics docker image to connect to the QMGR over TLS?

Wed March 30, 2022 08:47 AM

I am trying to get Sysdig working via IBM Cloud support.
I was wondering what work needs to be done in order to be able to connect to the QMGRs over TLS?

Tue March 29, 2022 06:29 AM

Hi @Abu Davis I would suggest the next step is to ask the IBM Cloud team for help debugging the consumption of these custom metrics. There are details of the various support plans available here, which determine the appropriate path for raising the question.

Regards, Matt

Tue March 29, 2022 06:07 AM

I did try to edit the Sysdig configmap to include the metrics, but that didn't seem to have any effect. Not sure where to look any further. There is however a lot of prometheus metrics being picked up from other pods.

Mon March 28, 2022 08:15 AM

Hi @Abu Davis, I've not configured this monitoring behaviour in a Kubernetes cluster personally (I was using an OpenShift cluster).

If you've followed the steps for Monitoring a Kubernetes cluster and are seeing some data items come through then the next place I would suggest looking is the configuration for Including and Excluding metrics, to confirm that the MQ metrics are included in the list that the agent should be collecting.

Regards, Matt

Mon March 28, 2022 06:21 AM

My use case is to monitor queue depths of QMGRs on IBM Cloud (IBM MQ Service), so I have an IBM Kubernetes cluster that is now showing up on the FREE tier of IBM Cloud Monitoring console. The metrics pod is deployed in the "default" namespace.

On the IBM Cloud Monitoring console, under Explore > Containerized Apps > Prometheus, I am not able to see any ibmmq* related fields. All I see is the built in prometheus Sysdig queries. What could I be doing wrong, any ideas?

When I check the metrics pod log, it seems to be working fine: 
----

time="2022-03-28T09:36:06Z" level=info msg="Connected to queue manager  XXXXX"

time="2022-03-28T09:36:08Z" level=info msg="IBMMQ Describe started"

time="2022-03-28T09:36:08Z" level=info msg="Platform is UNIX"

time="2022-03-28T09:36:08Z" level=info msg="Listening on http address :9157"

time="2022-03-28T09:57:27Z" level=info msg="IBMMQ Collect started 14000007565640"

time="2022-03-28T09:57:28Z" level=info msg="Collection time = 1 secs"


----

When accessing the created k8s service object via curl, it seems to be emitting data as it should and also shows the queues on my QMGR.
----
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 7
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.13.15"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.456464e+06

----

Mon March 28, 2022 04:37 AM

Hi @Abu Davis - for (2), yes - the technique above only works with IBM Cloud Monitoring for clusters deployed on IBM Cloud - i.e. Red Hat OpenShift on IBM Cloud and (I expect) the IBM Kubernetes Service. For other types of cluster there are typically equivalent alternatives - for example the embedded Logging & Monitoring capabilities of OpenShift (if deployed in your cluster), or other cloud provider monitoring options such as CloudWatch in AWS.

For (1), I would be surprised if this doesn't work with the IBM Monitoring free tier, unless there is potentially a limit on the number of endpoints that can be consumed or something similar.

Mon March 28, 2022 04:27 AM

Thank you for the answer.

1) I am running the "FREE" version of the IBM Cloud Monitoring instance, and I do not see a "Sources" column in the listed instance as in yours. Could this be the issue?


2) Also, this only works if your k8s/Openshift cluster is deployed on IBM Cloud, correct?

Mon March 28, 2022 04:16 AM

Hi @Abu Davis - when you deploy the Service object for the metrics pod using "oc apply -f ./metrics-service.yaml" there is a special annotation here in metrics-service.yaml that advertises the Prometheus endpoint for this service, which is picked up automatically by the IBM Monitoring service.

That technique works for any Service that is deployed to the cluster - you don't have to do any specific configuration in the IBM Monitoring service itself as it is already set up to look for that annotation.

Regards, Matt

Sat March 26, 2022 07:40 AM

After you deploy the IBM Cloud Monitoring instance, where do you configure/specify the prometheus metrics URL on it?

Mon May 17, 2021 03:58 AM

Hi @Abu Davis - thanks for your questions, I'm glad you found the article interesting!
To answer your queries;
  1. The way the metrics pod works you will need a separate deployment per queue manager that you want to monitor - so if you have two queue managers then deploy two instances of the metrics pod, each with its own ConfigMap/Secret to configure the settings. I did think a good next step would be to have a single metrics instance able to present metrics for multiple queue managers but that isn't something that I've looked at trying to implement - the ideal solution would be to include these additional metrics directly out of the queue manager container at some point in future.
  2. I haven't personally done forwarding of metrics from Prometheus to Grafana but these two articles on Prometheus metrics in Grafana cloud and Installing Prometheus operator with Grafana cloud look to illustrate the pattern you are looking for, using the "remote_write" option to push from Prometheus into Grafana

Best regards,

Matt

Sat May 15, 2021 03:46 PM

Very interesting article, something we've been keen on doing for MQ on CP4I, have two questions:
1) If I have a couple of qmgrs on CP4I, do I need to create multiple configmaps to extract metrics? 
2) Is it possible to forward metrics from Prometheus to Grafana instance (by installing Grafana Operator) on Openshift? The idea is to create a dashboard on Grafana to monitor MQ queue depth alerts.

Tue May 04, 2021 01:46 PM

Hi @Morag Hughson - this is using the existing/standard MQ metrics interface to retrieve data, there is no change on the queue manager side.

Specifically the mq-metric-sample package is used to build a container image that contains an MQ client application (written in Golang) that subscribes to the queue manager statistic topics, and presents that information on a Prometheus style HTTP endpoint.

Regards, Matt

Tue May 04, 2021 08:02 AM

Is this just polling the command server with a DISPLAY QUEUE/InquireQueue (PCF) command or have you got some other new interface to obtain these numbers?