MQ

  • 1.  Need example of real world use case

    Posted Tue January 24, 2017 06:09 PM

    Does anyone have an example of a real-world use case where you use MQ containers in production? 



  • 2.  RE: Need example of real world use case

    Posted Tue January 24, 2017 07:06 PM

    Funny you should ask.  IBM presented on this at the NY/NJ MQ User Group last week, and in their example you could pump out lots of containers, but all of them would have a QMgr named QM1.  This completely contradicts the advice they've been giving us for years to make sure all QMgrs have different names.  A cluster uses the QMID, but each QM1 in each container in this model probably also has the same QMID.  Message routing, the thing you really care about, uses only the QMgr name, so even if the QMID were different, nothing could route back to a given QM1 instance.

    I asked specifically about which use case they are trying to address in the example.  There was no specific use case cited, but assurances were made that some exist.  I'll believe it when I see it.  In the meantime...

    • If the things exchanging messages all meet at the same QMgr in the same container, this would work.
    • If you have an application that's going to fire-and-forget messages, and the destination can be reached over point-to-point channels, this would work.  (Clustering is out due to the QMgr name resolution issue.  Reply messages are out since the QMgr name is ambiguous.)

    If you wanted to publish a service that listened on a well-known queue name, this model fails.  If you wanted high availability and for apps to switch back and forth across QMgrs, this fails.  The use cases for which duplicate QMgr names fail are too numerous to count.  It's about as far as you can get from a Best Practice based on everything IBM's published to date on the subject.  Perhaps they will at some point publish some blog posts or docs that explain how this is useful and explain in detail when and how those use cases avoid all the problems with dupe QMgr names. 

     

    That said, if you are willing to build the QMgr as part of the first boot of the container, things are much more workable.  I have scripts installed at several of my customer sites that do this.  The MQ admin describes an MQ baseline configuration and one or more patterns to lay on top of it, and these are stored in a web-accessible repository.  When the container initializes, the MQ start script sees there are no QMgrs and queries the config database to discover what kind of QMgr goes there.  It then builds the QMgr, generates the cert, exchanges the cert with other QMgrs it needs to talk to, joins the cluster if required, sets up all the ACLs for the IDs that need to access it, builds the queues for the assigned apps, builds the queues for the monitoring solution, attaches the third-party web management console, and sends an email to the admin saying it's running.
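    As a rough illustration of that first-boot flow, here is a minimal Python sketch.  The command sequence (crtmqm, strmqm, runmqsc) is standard MQ tooling, but fetch_config, the host-derived QMgr name, and the repository it stands in for are all hypothetical placeholders, not anyone's actual implementation:

    ```python
    # First-boot bootstrap sketch: if no QMgr exists yet, fetch this host's
    # configuration pattern from a config repository and build a uniquely
    # named queue manager from it.  All names here are made up.
    import subprocess

    def fetch_config(host):
        """Stand-in for a real HTTP call to the config repository.

        A real implementation might GET a URL keyed by hostname and parse
        JSON; here we just derive a unique QMgr name and a sample pattern.
        """
        return {"qmgr": f"QM.{host.upper()}",
                "mqsc": "DEFINE QLOCAL(APP.REQUEST)"}

    def bootstrap_qmgr(host, run=subprocess.run):
        """Build and start the queue manager described by the repository.

        `run` is injectable so the command sequence can be exercised in a
        test without an MQ installation present.
        """
        cfg = fetch_config(host)
        qmgr = cfg["qmgr"]
        run(["crtmqm", qmgr])                               # create it
        run(["strmqm", qmgr])                               # start it
        run(["runmqsc", qmgr], input=cfg["mqsc"].encode())  # apply pattern
        return qmgr
    ```

    The real scripts would go on to handle certs, cluster membership, and ACLs, but the shape is the same: discover, build, configure.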

    This method supports pretty much any MQ use case since it builds pre-defined patterns and you can make those look like anything.  Need a hub? Make a pattern.  Need a fast-recovery HA QMgr? Make a Pattern.  Need really big messages? Make a pattern.

    I met with IBM's presenter after the meeting to discuss all of this but they seem content for now to demo containers that spew out dozens of QM1 instances.  Looks like we'll have to solve this one on our own.

    Short answer - plenty of good use cases for MQ containers if you instrument it to build unique QMgrs.



  • 3.  RE: Need example of real world use case

    Posted Wed January 25, 2017 01:37 AM

    T.Rob,

    Must admit, your examples and arguments make a lot more sense than creating containers with the same queue manager name in all of them.  I would be interested to see a simple script / demo / lab on how to start two containers and create a unique queue manager in each.  From there, I guess the complexity of the different setups is up to the individual admins.



  • 4.  RE: Need example of real world use case

    Posted Wed January 25, 2017 11:20 AM

    I've been keeping configuration info in Amazon S3 because it turns out to be hard at many customer sites to set up a repository.  When I first did this at Bank of America I wrote a bunch of CGI to fetch the configs out of AppWatch.  These days IR-360 has all the web service access built in.  But the point is that it's best to have one and only one config repository, and if there's a web tool available that holds config info, use that.

    Assuming an MQ config repository is available in some fashion, it's not hard to make the container go fetch the configuration info and set up a QMgr.  If you know in advance anything unique about the container, it can fetch a pre-determined config.  If you do not, it can ask the configuration manager for any configurations that are not confirmed to have been created.  That way the MQ admin specifies n new QMgrs, creates n new containers, and as they initialize they each get the next pending configuration. 
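    That claim-the-next-pending-configuration step could be sketched like this (the QMgr names and record layout are assumptions; a real repository would make the claim atomic, e.g. with a conditional update, so two containers can't grab the same config):

    ```python
    # Sketch of handing out the next pending QMgr configuration.  The admin
    # registers n configurations; each new container claims the first one
    # not yet confirmed as built.
    def claim_next_pending(configs):
        """Return the first unclaimed configuration and mark it claimed.

        `configs` is a list of dicts like
        {"qmgr": "QM.PAY.01", "claimed": False}.
        Returns None when nothing is pending, i.e. the admin has not
        specified any more QMgrs.
        """
        for cfg in configs:
            if not cfg["claimed"]:
                cfg["claimed"] = True
                return cfg
        return None
    ```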

    For containers it's usually one QMgr per container.  For virtual servers it's often one or more QMgrs.  So for containers the initialization is likely contained in a "run once" script that sets up the QMgr and then disables itself.  For virtual servers the configuration can be in the MQ boot script, such that it goes out to the config master on each boot, looks for its configuration, and makes the as-built match the as-specified by building any new QMgrs or objects it finds.  (I generally discourage automated deletion and instead prefer to have some human oversight for those operations.)
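    That boot-time "as-built matches as-specified" check amounts to a set difference.  A minimal sketch (the QMgr names are made up) that builds anything new but only reports extras for human review, rather than deleting automatically:

    ```python
    # Reconcile the as-specified QMgr list against what's actually built.
    # Extras are flagged for a human, never deleted automatically.
    def reconcile(specified, built):
        """Return (to_create, needs_review) given two sets of QMgr names."""
        to_create = sorted(set(specified) - set(built))
        needs_review = sorted(set(built) - set(specified))  # no auto-delete
        return to_create, needs_review
    ```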

    I've heard of and seen other sophisticated MQ config management systems at shops that just wanted to speed up build operations and make them more consistent and error-free.  That they work great for various types of virtualization is a bonus, since they often weren't built with that in mind.  I also know that IBM is loath to do anything that would undermine their ability to prove authenticity and ownership of their code, so perhaps they are intentionally ignoring all of the extant instrumentation.  What I can't figure out, though, is why they aren't at least saying "you should do this" instead of pitching a slide deck showing dozens of QM1 instances being pumped out and claiming it's a very useful way to do virtualization, especially when it contradicts decades of advice concerning dupe QMgr names.  The value of a network grows as the connection density increases, so identically named QMgrs actually inhibit the ability to realize value out of MQ.



  • 5.  RE: Need example of real world use case

    Posted Thu January 26, 2017 07:13 AM

    Hi, I'm the presenter from the user group that T.Rob is referring to.  There seems to have been a misunderstanding.  When considering a Container as a Service (CaaS) system like Kubernetes, Docker Swarm, or Apache Mesos, it is important to look at how stateful containers are managed.  It is common in many of these systems to scale a container by using multiple exact replicas.  This is something to watch out for, and to avoid, because it can lead to a situation where you have multiple copies of "QM1".  I think we are all agreed that is not what we want.  However, some CaaS systems handle this better; in particular, my demo used Kubernetes, where you can scale in a way that gives every queue manager a unique identity.  In my demo, I used a Kubernetes StatefulSet of queue managers, which created three queue managers, called "mq0", "mq1" and "mq2", all managed by one object.  I then adjusted the size of the StatefulSet to four queue managers, and "mq3" was added.
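    For readers who haven't used StatefulSets, a manifest along these lines gives each replica a stable identity and its own storage.  This is a generic sketch, not the demo's actual manifest: the image name, port, and storage size are assumptions, and note that Kubernetes itself names the Pods with a stable ordinal suffix (mq-0, mq-1, mq-2), so scaling up adds mq-3 rather than another copy of the same name:

    ```yaml
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mq
    spec:
      serviceName: mq        # headless service gives each Pod a stable DNS name
      replicas: 3            # scaling to 4 adds mq-3 with its own identity
      selector:
        matchLabels:
          app: mq
      template:
        metadata:
          labels:
            app: mq
        spec:
          containers:
          - name: qmgr
            image: mq-demo:latest      # assumed image name
            ports:
            - containerPort: 1414      # standard MQ listener port
      volumeClaimTemplates:            # per-Pod storage, so state follows identity
      - metadata:
          name: mqdata
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 2Gi
    ```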

    In this particular demo, I showed an application where the clients putting messages didn't mind which instance of the queue manager they used.  This was because there was a single receiving application, load balanced across all the queue managers, which works as long as all queue managers have the same queue defined and message ordering is not critical.  You can architect your applications in a hundred different ways, and this was one.  An alternative would be to bind a different receiving client application directly to each queue manager - in Kubernetes, you could do this by adding the client application to the same Pod as the queue manager.  Another architecture would be to define the queue managers in a cluster, and then MQ could do the routing internally.  There are many options here, and you have a lot of flexibility to design an architecture appropriate for your application.

    Something else to consider is whether to deploy MQ objects (such as queues) required by your applications to a shared set of queue managers, or whether to deploy queue managers as part of your application.  The latter allows you to bake a much more fixed MQ configuration in at development time, rather than varying configuration at runtime, which can be helpful for reliability.  In this particular demo, that is what I did, with the configuration baked into the Docker image.  This won't work for all use cases, of course, in which case you could deploy multiple different baked-in configurations, or use a configuration management tool like Chef (or a custom tool like the one T.Rob describes).
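    Baking configuration in at build time can be as simple as copying an MQSC file into the image.  A hedged sketch, where the base image tag and the /etc/mqm convention are assumptions about the particular MQ image in use (some MQ images run *.mqsc files found there at queue manager startup; check the docs for yours):

    ```dockerfile
    # Assumes a base MQ image that applies *.mqsc files found under /etc/mqm
    # when the queue manager starts; adjust the path for your actual image.
    FROM ibmcom/mq:latest
    # Queues, channels, and permissions fixed at development time:
    COPY config.mqsc /etc/mqm/
    ```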

    I hope that clears up any misunderstanding.  Please let me know if not, and I'd be happy to discuss this further.



  • 6.  RE: Need example of real world use case

    Posted Thu January 26, 2017 12:45 PM

    Thanks Arthur, that helps a lot.  As per our discussion after the presentation, hopefully that will make it into the deck.  And the KC.   When we first offered MQ HVE, the tooling did not make it easy to deploy unique QMgrs and we worked hard based on customer feedback to get that into the product and KC. Some of the tooling looks like the scripts I provided at the time to resolve this exact issue.

    But as I go from customer to customer and suggest some form of virtualization where their use case calls for it, many are reluctant to consider it based on what they find in the docs and online.  In one case, a customer who had presented a conference session about their advanced automation of WAS with Docker, Puppet, and Chef would not even consider using IBM's MQ virtualization for their cloud implementation because - quoting the department exec - "IBM doesn't support that."

    In the presentation at the NY/NJ MQ UG, the "beware that this pumps out multiple QM1" part came across loud and clear.  If IBM has tooling that initializes the QMgrs to unique or predefined names to overcome that issue, it needs to be front and center, because based on comments in the room at the time, the title of this thread, and my (admittedly small-sample) anecdotal experience, people aren't seeing it.  Looking over the most current Docker info in the KC, I find no discussion of the issue.  It assures the reader that "this feature can be a major benefit to continuous delivery in your enterprise" but, if followed, would crank out multiple instances of the same QMgr name.

    I'm not bashing what's there, but mainly because I have developed automation to make it work for real-world customers.  While it isn't a problem for me, many of my clients change their mind when they realize that the use cases supported out of the box are so niche that they can't use it without significant investment in tooling to manage QMgr uniqueness.  OP asks "what's the use case for this" but the better question may be "how can we use this to meet out most common use cases?"  Those are dots IBM should be connecting if MQ is to thrive in the cloud.