Funny you should ask. IBM presented on this at the NY/NJ MQ User Group last week, and in their example you could pump out lots of containers, but every one of them would have a QMgr named QM1. This completely contradicts the advice they've been giving us for years to make sure all QMgrs have different names. A cluster uses the QMID, but in this model each QM1 in each container probably also has the same QMID (it would, if the QMgr is baked into the image rather than built at startup). Message routing, the thing you really care about, uses only the QMgr name, so even if the QMIDs were different, nothing could route back to a given QM1 instance.
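To make the routing point concrete, here's a minimal requester sketch using the pymqi client library. The host, channel, and queue names are invented for illustration. The only reply-routing information carried in the message descriptor is a queue name and a queue manager *name*; if every container calls itself QM1, the ReplyToQMgr field can't identify which instance should get the reply.

```python
import pymqi

# Hypothetical connection details, for illustration only.
conn_info = 'mq.example.com(1414)'
channel = 'DEV.APP.SVRCONN'

qmgr = pymqi.connect('QM1', channel, conn_info)

md = pymqi.MD()
md.MsgType = pymqi.CMQC.MQMT_REQUEST
md.ReplyToQ = b'APP.REPLY.QUEUE'   # where the reply should land
md.ReplyToQMgr = b'QM1'            # routing is by QMgr *name* only --
                                   # ambiguous when every instance is QM1

request_q = pymqi.Queue(qmgr, 'APP.REQUEST.QUEUE')
request_q.put(b'request payload', md)
request_q.close()
qmgr.disconnect()
```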
I asked specifically which use case they were trying to address with the example. No specific use case was cited, but assurances were made that some exist. I'll believe it when I see it. In the meantime...
- If the things exchanging messages all meet at the same QMgr in the same container, this would work.
- If you have an application that's going to fire-and-forget messages, and the destination can be reached over point-to-point channels, this would work (see the sketch after this list). Clustering is out due to the QMgr name resolution issue, and reply messages are out since the QMgr name is ambiguous.
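For that fire-and-forget case, the same kind of pymqi sketch (again with hypothetical names) shows why it works: a datagram carries no reply-to information, so the ambiguous QMgr name never has to be resolved on the way back.

```python
import pymqi

# Hypothetical names; any one of the identical QM1 containers will do,
# because nothing ever needs to route back to this particular instance.
qmgr = pymqi.connect('QM1', 'DEV.APP.SVRCONN', 'mq.example.com(1414)')

md = pymqi.MD()
md.MsgType = pymqi.CMQC.MQMT_DATAGRAM   # fire-and-forget: no reply expected

queue = pymqi.Queue(qmgr, 'APP.EVENT.QUEUE')
queue.put(b'event payload', md)
queue.close()
qmgr.disconnect()
```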
If you wanted to publish a service that listens on a well-known queue name, this model fails. If you wanted high availability, with apps switching back and forth across QMgrs, it fails. The use cases for which duplicate QMgr names fail are too numerous to count. It's about as far as you can get from a Best Practice based on everything IBM has published to date on the subject. Perhaps at some point they will publish blog posts or docs that explain how this is useful and detail when and how those use cases avoid all the problems with duplicate QMgr names.
That said, if you are willing to build the QMgr as part of the first boot of the container, things are much more workable. I have scripts installed at several of my customer sites that do this. The MQ admin describes an MQ baseline configuration and one or more patterns to lay on top of it, and these are stored in a web-accessible repository. When the container initializes, the MQ start script sees there are no QMgrs and queries the configuration repository to discover what kind of QMgr goes there. It then builds the QMgr, generates the cert, exchanges certs with the other QMgrs it needs to talk to, joins the cluster if required, sets up the ACLs for the IDs that need access, builds the queues for the assigned apps and for the monitoring solution, attaches the 3rd-party web management console, and sends an email to the admin saying it's running.
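As a rough illustration of what such a first-boot script can look like (not my customers' actual scripts; the repository URL, pattern format, and queue names are all invented for the sketch), the flow is: detect that no QMgr exists yet, look up the pattern assigned to this host, then create, start, and configure the QMgr with the standard MQ commands.

```python
#!/usr/bin/env python3
"""First-boot bootstrap sketch: build a uniquely named QMgr from a pattern."""
import json
import socket
import subprocess
import urllib.request

# Hypothetical configuration repository and pattern format.
CONFIG_REPO = 'https://mq-config.example.com/patterns'


def qmgr_exists() -> bool:
    """dspmq lists the QMgrs on this installation; empty output means none."""
    out = subprocess.run(['dspmq'], capture_output=True, text=True)
    return bool(out.stdout.strip())


def fetch_pattern(host: str) -> dict:
    """Ask the config repo which kind of QMgr belongs on this host."""
    with urllib.request.urlopen(f'{CONFIG_REPO}/{host}.json') as resp:
        return json.load(resp)


def run_mqsc(qmgr_name: str, mqsc: str) -> None:
    """Feed MQSC definitions (queues, channels, AUTHRECs, ...) to runmqsc."""
    subprocess.run(['runmqsc', qmgr_name], input=mqsc, text=True, check=True)


def bootstrap() -> None:
    if qmgr_exists():
        return  # not the first boot; nothing to build

    pattern = fetch_pattern(socket.gethostname())
    qmgr_name = pattern['qmgr_name']          # unique per container/host

    subprocess.run(['crtmqm', qmgr_name], check=True)   # create the QMgr
    subprocess.run(['strmqm', qmgr_name], check=True)   # start it

    # Baseline first, then the overlay pattern(s): queues, channels,
    # cluster membership, AUTHRECs for the app and monitoring IDs, etc.
    run_mqsc(qmgr_name, pattern['baseline_mqsc'])
    for overlay in pattern.get('overlay_mqsc', []):
        run_mqsc(qmgr_name, overlay)

    # Cert generation/exchange and the "it's running" email would go here.


if __name__ == '__main__':
    bootstrap()
```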
This method supports pretty much any MQ use case since it builds from pre-defined patterns and you can make those look like anything. Need a hub? Make a pattern. Need a fast-recovery HA QMgr? Make a pattern. Need really big messages? Make a pattern.
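What a pattern actually contains is up to the MQ admin. As a purely hypothetical example of the idea (the format is invented here, not taken from any real deployment), a pattern document in the repository might hold a baseline every QMgr gets plus an overlay for its role, expressed as MQSC:

```python
# Hypothetical pattern document: a baseline every QMgr receives,
# plus an overlay that makes this particular QMgr a cluster hub.
hub_pattern = {
    'qmgr_name': 'HUB01',
    'baseline_mqsc': """
        DEFINE QLOCAL(APP.DEAD.LETTER.QUEUE) REPLACE
        ALTER QMGR DEADQ(APP.DEAD.LETTER.QUEUE)
        DEFINE CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) REPLACE
    """,
    'overlay_mqsc': [
        """
        * Full repository for the cluster, so this QMgr acts as the hub
        ALTER QMGR REPOS(APPCLUSTER)
        DEFINE CHANNEL(TO.HUB01) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
               CONNAME('hub01.example.com(1414)') CLUSTER(APPCLUSTER) REPLACE
        """,
    ],
}
```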
I met with IBM's presenter after the meeting to discuss all of this, but they seem content for now to demo containers that spew out dozens of QM1 instances. Looks like we'll have to solve this one on our own.
Short answer - there are plenty of good use cases for MQ containers if you instrument the container to build a unique QMgr at first boot.