If you’ve been following the content in recent MQ CD releases, you’ll already be aware of Uniform Clusters - a pool of identical queue managers serving the same purpose and hosting the same objects, such that applications can move (or be rebalanced) between them for availability and scalability.
MQ 9.1.2 and 9.1.3 focused on introducing these new capabilities and expanding the types of application which can use them. While work on this continues in MQ 9.1.4 and beyond, we have also switched focus slightly to look at:
- Simplifying administration of a uniform cluster, and
- some new features which can also have wider-reaching administrative benefits.
The core of a uniform cluster is a set of queue managers which are ‘identical’ - this means they must have all the same objects (queues, topics), connectivity (client/svrconn channels), security artifacts etc. so that the pool of applications connecting to them can rely on everything they need being available. They also need to be members of an MQ cluster for intercommunication purposes.
Initially creating such a configuration by hand is not too hard - and scripting creation of MQ objects is a well-trodden path after all. However, making sure the configuration stays in step between the queue managers is not so easy. And what about making changes: adding new queues, say, or even a whole new queue manager?
To assist with these questions, MQ 9.1.4 provides several new facilities collectively known as ‘auto-configuration’. This can be broken down into three areas:
- Automatic MQSC script invocation,
- Auto-configuration for Queue Manager ‘ini’ settings, and
- (Uniform) Cluster specific auto-configuration.
MQSC and object definitions:
At queue manager creation, or by modifying qm.ini, you can now provide one or more MQSC files to be replayed on every queue manager start (see the KnowledgeCenter for details of the syntax).
You would probably choose to host these files centrally somewhere - this gives you an easy way to treat queue manager ‘config as code’ - managing key definitions for a whole pool of queue managers from one place, perhaps with version control.
As an example, after configuring your queue managers in this way, you could add a new queue to all queue managers with a single DEFINE command in this file, which will be picked up on next restart. While any MQSC can be issued in this way, in general, DEFINE … REPLACE commands are likely to be most useful, to enforce a ‘known’ configuration on all queue managers.
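As a sketch, such a shared file might contain nothing more than a handful of definitions (the object names here are invented purely for illustration):

* Central definitions for the uniform cluster, replayed on every queue manager start.
* DEFINE ... REPLACE ensures existing objects are updated to match the 'known' configuration.
define qlocal('APP.REQUESTS') maxdepth(50000) replace
define qlocal('APP.REPLIES') replace
define topic('APP.EVENTS') topicstr('app/events') replace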
Queue Manager Ini:
At queue manager creation (or afterwards), you can similarly provide a file or files containing qm.ini definitions. Again, see the KnowledgeCenter for full details.
In the same way, once set up this can be used to effect changes across all the queue managers using the same file. For example, you could enable FASTPATH channels on all of your queue managers at next restart with a single edit.
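As a minimal sketch, a shared ini file containing just the following stanza is all that edit would take - each queue manager referencing it switches its channels to FASTPATH (trusted) bindings at next restart:

Channels:
   MQIBindType=FASTPATH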
But what does any of this have to do with uniform clusters? I’m getting to that…
Uniform Cluster specifics:
Both of the above features are available to any queue manager and can be very useful regardless of whether you are also exploiting uniform clusters. However, combined with some additions unique to uniform clusters, they become even more powerful.
A new stanza named ‘AutoCluster’ can now be defined in qm.ini. An example AutoCluster stanza is provided in the KnowledgeCenter, and I’ll duplicate it here:
AutoCluster:
   Repository1Conname=QMB.dnsname(1414)
   Repository1Name=QMB
   Repository2Conname=QMA.dnsname(1414)
   Repository2Name=QMA
   ClusterName=UNICLUS
   Type=Uniform
‘Auto Clustering’ is a new concept in 9.1.4, and allows you to both initially define, and subsequently scale, your uniform cluster with an absolute minimum of additional configuration. The stanza in the ini file (which can now of course be shared to all queue managers through the templating mechanism described above), provides almost ALL the information a queue manager needs to join the cluster.
Only one other thing is needed - a CLUSRCVR definition. CLUSSDRs and queue manager REPOS settings are all derived and managed from this ini stanza: a queue manager whose name matches Repository1Name or Repository2Name automatically makes itself a full repository, and every member automatically defines cluster sender channels to the repository connames.
And the CLUSRCVR definition can of course be centrally managed too, via the new MQSC support. But wait… if you’re familiar with MQ Clustering, you’ll know that this needs to advertise how other queue managers in the cluster are going to reach this queue manager (the CONNAME). That can’t be the same for every queue manager!
In most environments, however, the CONNAME is likely to be easily available locally - we will need to know the port we are going to listen on one way or another, and we probably have our hostname available via an environment variable or system command. Therefore, we just need a mechanism to make this available at the time the ‘template’ MQSC is processed.

The solution provided is another new concept: an ‘ini variable’, supplied at queue manager creation. With this to hand, we have everything we need to define the queue manager as being part of this cluster, AND ensure it is in step with all the other members. Now we can simply add something like the following to our single central MQSC file, and pick up all the inserts as required for each queue manager:
define channel('+AUTOCL+_+QMNAME+') chltype(clusrcvr) trptype(tcp) conname('+CONNAME+') cluster('+AUTOCL+') replace
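For illustration, given the AutoCluster stanza above and a variable supplied as CONNAME (as in the crtmqm commands shown below), the +AUTOCL+, +QMNAME+ and +CONNAME+ inserts are resolved when the file is replayed, so on QMA this would be processed roughly as:

define channel('UNICLUS_QMA') chltype(clusrcvr) trptype(tcp) conname('QMA.dnsname(1414)') cluster('UNICLUS') replace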
Bringing it all together:
The above may sound a bit complicated, as I’ve whizzed through describing several new concepts all at once. Obviously, each of these new facilities has its own full documentation in the KnowledgeCenter which you can peruse at your leisure.
However, the important thing is that when you pull it all together, this really does vastly simplify creating or modifying your uniform cluster config. Let’s look at a quick summary of the steps:
- Decide where your full repositories are going to be hosted. Just like any other MQ cluster, you need to choose two queue managers which will help ‘manage’ the cluster. There doesn’t need to be anything particularly special about these (you just need to know they will exist).
- Define your ‘AutoCluster’ configuration in a standalone ini (template) file. Just copy and paste the one from above and modify to match your environment!
- Define your template CLUSRCVR in a standalone MQSC (template) file. Use +INSERT+ syntax to automatically pick up queue manager names, cluster names, and your new ‘CONNAME’ variable as appropriate (along with any other standard channel attributes you wish to configure). While you’re here, you can define any queues, topics, channels, authority records, etc. etc. needed by your applications - either in the same file, or in a number of files in the same directory.
(Distribute these files however you wish to your queue manager hosts. If mounted/networked shared storage is available to you, this is likely to be a good option.)
- (Here’s the good bit). Create your queue managers! This will look something like going to their respective hosts-to-be and running:
crtmqm -p 1414 -ii /shared/uniclus.ini -ic /shared/uniclus.mqsc -iv CONNAME=QMA.dnsname(1414) QMA
crtmqm -p 1414 -ii /shared/uniclus.ini -ic /shared/uniclus.mqsc -iv CONNAME=QMB.dnsname(1414) QMB
crtmqm -p 1414 -ii /shared/uniclus.ini -ic /shared/uniclus.mqsc -iv CONNAME=QMC.dnsname(1414) QMC
[…]
The key thing here is that other than ensuring the queue manager name, hostname and port are unique to each crtmqm, no bespoke configuration is needed. It doesn’t matter whether we’re adding the 1st or 10th queue manager, each will pick up and apply the configuration appropriately. To add a new queue manager to the ‘pool’ and scale the application balancing in our cluster, simply identify a new host and play in the same configuration files. And when we come to make changes (define a new queue for example), just updating one central file is enough to ensure that on next restart every queue manager sees the same definitions.
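For example, bringing a fourth queue manager into the pool (hostname following the same pattern as above) is just one more command on the new host:

crtmqm -p 1414 -ii /shared/uniclus.ini -ic /shared/uniclus.mqsc -iv CONNAME=QMD.dnsname(1414) QMD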
This doesn’t solve all configuration issues entirely - for example removing a queue manager from the pool still requires significant manual intervention. Watch this space for much more to come in this area - but hopefully this has shown how with MQ 9.1.4 establishing and maintaining your Uniform Cluster just got a lot easier.