Certified container scaling part 2: Scaling up and down 

RobParker
Published on 25/06/2019 / Updated on 25/06/2019

In the previous blog post of this two-part series, we looked at how you can create a group of MQ certified containers to spread your messaging load across. In this blog post we will look at how you can modify the number of queue managers in that group, with minimal disruption, to increase or decrease your message throughput capability.

Preface

This blog post continues directly from part 1, so we will be building on the group of 3 queue managers deployed there. As a reminder, we deployed 3 IBM MQ Advanced for Developers queue managers in IBM Cloud Private with the following parameters:

  • Deployed using the IBM MQ Developer certified container with a release name of “mq-scale-#”, where # was a number from 1 to 3
  • Each deployment has a single Kubernetes NodePort service providing direct access to it. This service is used by our consumer applications, to ensure we have a consumer application connected to each queue manager.
  • Each has the additional Kubernetes label mqscalinggroup=MQSCALEGROUP1

We also created a Kubernetes service called “ibmmq-scale-demo“. This service will send connections to any queue manager that has the label “mqscalinggroup=MQSCALEGROUP1“. We use this service with our producer application to send messages to all of the queue managers in the group.
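If you want to confirm which queue managers are currently in the group, you can list the pods that carry the scaling label (this assumes the label was applied to the pods as described in part 1):

kubectl get pods -l mqscalinggroup=MQSCALEGROUP1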

Scaling up

To scale up, we first need to create a new deployment of the MQ certified container with the matching additional label. Next we need to attach a new consumer application. The ibmmq-scale-demo service will automatically pick up the new queue manager and send connections to it, as it has the required label.

Deploying the new queue manager

Use the following command to deploy the new MQ certified container:

helm install --tls --name mq-scale-4 --set license=accept --set service.type=NodePort --set metadata.labels.mqscalinggroup=MQSCALEGROUP1 ibm-charts/ibm-mqadvanced-server-dev

You can check the status of the new deployment by executing the following kubectl command:

kubectl get pods -w

When the Ready column shows the new pod as ready, the Kubernetes service will begin forwarding connections to it. You can verify this by connecting a consumer application directly to the queue manager, using the NodePort service deployed with the MQ certified container. To find out the connection details of the queue manager, run the following command:

kubectl describe service mq-scale-4-ibm-mq

In the output of this command you can see that the endpoint port 1414 has been bound to NodePort 31464 (the port number will differ in your environment).
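If you prefer to pull out just the NodePort rather than reading the full describe output, a jsonpath query along these lines should return it (the service name assumes the release name used above):

kubectl get service mq-scale-4-ibm-mq -o jsonpath='{.spec.ports[?(@.port==1414)].nodePort}'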

Connecting a new consumer application

Now that the queue manager is running and ready, it will have had messages placed on its queue (assuming the test applications from part 1 are still running; if they are not, restart them all). Next we must connect a consumer application to verify that the new deployment is receiving messages. Run the following command, replacing -Port- with your port number and -Cluster address- with your IBM Cloud Private cluster IP address or hostname.

docker run -e MQSERVER="DEV.APP.SVRCONN/TCP/-Cluster address-(-Port-)" mqtestapp -get -noterm

At this point we have successfully scaled up our deployment from a group of 3 queue managers to a group of 4 queue managers.

This example does require you to manually connect your consumer applications and manage their connections, which adds planning and overhead to ensure all messages are processed. A solution to this is the Uniform Cluster feature, which was added in version 9.1.2 of IBM MQ. This allows you to configure your queue managers and applications in a way that lets MQ automatically rebalance applications.

In the next section we will look at how you can remove a queue manager from your group to scale down.

Scaling down

Before we can scale down our deployments, we need to identify one of the queue managers to scale down and prevent new connections from being sent to it. Because we don’t want to lose message data, we should also ensure that queue manager has a consumer connected to it. In this example I will scale down by removing the mq-scale-4 deployment.

Once we are sure that a consumer is attached to the queue manager, we need to tell kubernetes to stop sending connections to it. The way we do this is by marking it as “not Ready”. This can be achieved by stopping the listener on the queue manager. Run the following command:

kubectl exec mq-scale-4-ibm-mq-0 endmqlsr

Once the listener is stopped, the queue manager will no longer accept new connections. The readiness check will also detect that the queue manager can no longer accept connections and mark the pod as “not ready”. You can verify this by running the following command and checking that the Ready column shows mq-scale-4-ibm-mq-0 as not ready:

kubectl get pods
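If you would rather check the readiness flag directly instead of scanning the table, a jsonpath query such as the following should return false once the listener has been stopped:

kubectl get pod mq-scale-4-ibm-mq-0 -o jsonpath='{.status.containerStatuses[0].ready}'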

Now that no new connections can be made, we need to stop any existing connections from placing messages on the queues. There are a couple of ways we can do this, each causing a different error to be returned to the client.

Option 1: Stop the producer connections

One option is to follow this document to identify the connections that we want to disconnect and then forcibly disconnect them. In doing this, the application will receive a return code of MQRC_CONNECTION_BROKEN (2009). This method works if your consumer and producer applications are named differently; in our case we cannot use it because both applications share the same name. When using this option you can take advantage of the client auto-reconnect functionality to allow your applications to reconnect automatically.
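As a rough sketch of what that could look like (not usable in our setup, since our producer and consumer share the same application name), the MQSC commands below list connections for a hypothetical application name 'myproducer' and then stop one of them; the connection id is a placeholder taken from the DISPLAY output:

printf "DISPLAY CONN(*) WHERE(APPLTAG EQ 'myproducer')\n" | kubectl exec -i mq-scale-4-ibm-mq-0 runmqsc
printf "STOP CONN(<connection id>)\n" | kubectl exec -i mq-scale-4-ibm-mq-0 runmqsc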

Option 2: Stop the channel

Another option is to execute an MQSC command to stop the channel your producer applications connect through. If you opt for this method, your producer applications are likely to receive a return code of MQRC_CONNECTION_QUIESCING (2202). This method works if your consumer and producer applications connect using different channels, as stopping a channel will disconnect all applications connected via that channel. In our case we are using the same channel for both, so we cannot use it.
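For illustration only, if your producers connected over a dedicated channel (say a hypothetical DEV.PRODUCER.SVRCONN), stopping it would look something like this:

printf "STOP CHANNEL('DEV.PRODUCER.SVRCONN') MODE(QUIESCE)\n" | kubectl exec -i mq-scale-4-ibm-mq-0 runmqsc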

Option 3: Put inhibit the queue

Another option is to modify the queue definition to inhibit putting to the queue. This prevents any producer applications from placing more messages onto the queue, but still allows consumers to remove messages. If a producer attempts to place a message on a put-inhibited queue, it will receive a return code of MQRC_PUT_INHIBITED (2051).

Option 4: Redesign your applications to reconnect after X minutes

Another option would be to modify your application design so that applications automatically reconnect after a set number of minutes. By disconnecting and reconnecting they would automatically be sent to an available queue manager without receiving any error messages. The problem with this approach is that it relies on your application developers to ensure this functionality is added; if it is not, the application may never disconnect and would instead have to be forcibly disconnected.

For all options you may need to consider redesigning your applications to handle a return code differently. In our case option 3 is the best choice: it stops applications from being able to put messages to the queue, but still allows consuming applications to remove messages. Run the following command to put-inhibit the queue and prevent further messages from being put onto it:

printf "ALTER QLOCAL('DEV.QUEUE.1') PUT(DISABLED)\n" | kubectl exec -i mq-scale-4-ibm-mq-0 runmqsc

Once you have prevented producer applications from putting messages to the queue, you need to query the current depth of your queues and wait for them to become empty. This can be done using the following command:

printf "DISPLAY QLOCAL('DEV.QUEUE.1') CURDEPTH\n" | kubectl exec -i mq-scale-4-ibm-mq-0 runmqsc

Once the queue depth shows 0, you can safely remove the queue manager from the group. Run the following commands to delete the helm release and then free the associated volume:

helm delete --tls --purge mq-scale-4
kubectl delete pvc data-mq-scale-4-ibm-mq-0

At this point the queue manager will have been deleted and its storage freed. Your deployment now consists of 3 queue managers instead of 4. In a production environment you may opt not to delete the volume, as keeping it could speed up scaling up again in the future.

Conclusions

In conclusion, while scaling up the MQ certified container is relatively simple, scaling down requires planning and careful execution to ensure that you do not lose message data.

Additionally, although this method can have minimal impact on applications, there is still a chance that producer applications will be interrupted. Your applications may need to handle new error cases in order to recover cleanly. Because of this you should also consider using the automatic client reconnect functionality available to IBM MQ clients to quickly reconnect clients that have been disconnected.

One main flaw with this design is that it requires you to carefully plan and manage your applications to ensure that they are evenly spread across your queue managers. A solution to this would be to use the new Uniform Cluster feature added in version 9.1.2, which is supported when running in IBM Cloud Private.

Tidying up after ourselves

Throughout this blog series we have created several deployments and resources. To remove all resources that were created you can run the following commands:

helm delete --tls --purge mq-scale-1 mq-scale-2 mq-scale-3
kubectl delete pvc data-mq-scale-1-ibm-mq-0 data-mq-scale-2-ibm-mq-0 data-mq-scale-3-ibm-mq-0

 
