I haven't tried it, but in theory OpenShift Service Mesh should work with MQ. However, the last time I looked, there was no way of passing the certificate information through to MQ - TLS is handled completely transparently by the service mesh, and MQ just gets a regular TCP connection. This presents a problem for authentication and authorization, as MQ doesn't know who the connection is from. For HTTPS connections, the service mesh will flow the client identity in an HTTP header, but it doesn't know how to do that with plain TLS connections. You could always flow a user ID and password in the MQ connect flow, but that removes some of the advantages of using certificates in the first place.
My information may be out of date though, so this may be worth confirming.
------------------------------
Arthur Barr
Container Architect, IBM MQ
IBM
------------------------------
Original Message:
Sent: Wed February 09, 2022 03:54 AM
From: Ron Peereboom
Subject: MQ Operator internal workings
@Abu I don't know the answer to your question but I do not expect the operator to support that. Maybe someone else knows.
I am considering a different approach. The idea is to let OpenShift Service Mesh (Istio) handle all certificate management and mTLS connections. The service mesh has integrated certificate rotation and can use an external CA. This should work once I add all our ACE integrations and QMgrs to the same service mesh, as sketched below.
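For the workloads that are inside the mesh, I would expect a namespace-wide policy to be enough to enforce mTLS between the sidecars. Just to sketch the idea (untested with MQ, and the namespace name is made up):

# Hypothetical sketch: make the sidecars in this namespace accept only mTLS
# traffic, so Istio handles certificates and rotation for in-mesh connections.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: mq-integrations   # made-up namespace for the ACE and MQ workloads
spec:
  mtls:
    mode: STRICT               # only accept mTLS traffic from within the mesh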
In my case I cannot put all our QMgrs into the service mesh, so there is still a need for mTLS to connect to the queue managers outside the mesh. Hopefully we can use Istio egress for this, but I am not sure that is feasible.
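For those external queue managers, my rough (untested) idea is to register them with the mesh and let the sidecar or egress gateway originate the mTLS connection, along these lines - the host name and certificate paths are made up, and the client certificate would still need to be mounted into the proxy somehow:

# Hypothetical sketch: describe an external queue manager to the mesh...
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-qmgr
spec:
  hosts:
  - qm1.example.com            # made-up host of a queue manager outside the mesh
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 1414
    name: tcp-mq
    protocol: TCP
---
# ...and have the proxy originate mTLS towards it using a mounted client certificate.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: external-qmgr-mtls
spec:
  host: qm1.example.com
  trafficPolicy:
    tls:
      mode: MUTUAL                                # proxy originates mTLS to the external QMgr
      clientCertificate: /etc/qmgr-certs/tls.crt  # example paths only
      privateKey: /etc/qmgr-certs/tls.key
      caCertificates: /etc/qmgr-certs/ca.crt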
Right now these are just ideas; I have not yet tried any of this in practice.
Kind regards,
Ron Peereboom
------------------------------
Ron Peereboom
Original Message:
Sent: Tue February 08, 2022 09:44 AM
From: Abu Davis
Subject: MQ Operator internal workings
Thanks for the info. If we were to use our own CA-signed certificates for MQ, how could we implement an automatic certificate rotation regime with the MQ Operator? Is this something that is supported?
------------------------------
Abu Davis
Original Message:
Sent: Thu January 20, 2022 06:25 AM
From: Arthur Barr
Subject: MQ Operator internal workings
The MQ Operator uses a container image based on the standard MQ sample image code. When the queue manager container is run for the first time with a specific PersistentVolume, it runs the "crtmqm" command with the options "-ii /etc/mqm/" and "-ic /etc/mqm/". The MQ Operator mounts any MQSC or INI files supplied via ConfigMaps or Secrets into the "/etc/mqm" directory. As described in the MQ documentation for "Automatic configuration from an INI script at startup" and "Automatic configuration from an MQSC script at startup", the queue manager processes the supplied INI and MQSC files at "strmqm" time.
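To make that concrete, supplying MQSC via a ConfigMap looks roughly like the sketch below. The names are just examples, and you should check the QueueManager API reference for your operator version for the exact fields and supported MQ versions:

# Example ConfigMap holding an MQSC script (names and queue definition are illustrative).
apiVersion: v1
kind: ConfigMap
metadata:
  name: mqsc-example
data:
  config.mqsc: |
    DEFINE QLOCAL('APP.QUEUE.1') REPLACE
---
# Minimal QueueManager referencing the ConfigMap; the operator mounts the listed
# items into /etc/mqm, and the queue manager applies them at strmqm time.
apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: quickstart
spec:
  license:
    accept: true
    license: <license-ID-for-your-MQ-version>   # placeholder - use the ID for your version
    use: NonProduction
  version: <MQ-version-supported-by-your-operator>   # placeholder
  queueManager:
    name: QM1
    mqsc:
    - configMap:
        name: mqsc-example
        items:
        - config.mqsc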
The queue manager only applies changes to the configured files at "strmqm" time, to prevent an error from leaving a partial update applied. Changes therefore require a queue manager (Pod) restart. The MQ Operator will automatically restart the Pod for you if you switch to a different ConfigMap/Secret, or you can delete the Pod yourself to trigger a restart. Unfortunately, there's no way of manually triggering a Native HA rolling update, so the easiest approach here, I think, would be to force one by (say) changing a custom label or annotation for the Pods in your QueueManager YAML, which makes the MQ Operator perform a rolling update to apply the change. Alternatively, you can of course apply changes immediately by running "runmqsc" against the new files yourself while the queue manager is running.
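As a sketch of that label/annotation approach (on the assumption, worth verifying against the QueueManager API reference, that labels added to the QueueManager metadata are propagated down to the Pods), you could bump a throwaway label whenever the MQSC/INI content changes and re-apply the YAML, so the Pod template changes and a rolling update follows:

# Same example QueueManager as above, with a made-up revision label added.
apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: quickstart
  labels:
    example.com/config-revision: "2"   # made-up label; bump it to force a rolling restart
spec:
  license:
    accept: true
    license: <license-ID-for-your-MQ-version>   # placeholder
    use: NonProduction
  version: <MQ-version-supported-by-your-operator>   # placeholder
  queueManager:
    name: QM1
    mqsc:
    - configMap:
        name: mqsc-example
        items:
        - config.mqsc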
I hope this helps - please feel free to ask any more questions.
------------------------------
Arthur Barr
Container Architect, IBM MQ
IBM
Original Message:
Sent: Wed January 12, 2022 11:46 AM
From: Ron Peereboom
Subject: MQ Operator internal workings
Hi,
I would like to know how the MQ Operator works internally, but I cannot find any information about that. I am primarily interested in how changes to MQSC configuration are handled.
It is perfectly possible to create a Native HA queue manager and run it in OpenShift. I could then configure it further using remote MQSC, or even log in to the pod and apply any necessary configuration. Over time there will be a need to add, modify and delete queues.
The preferred approach would be to have the full MQ configuration under version control (Configuration as Source). That whole configuration could then be loaded into OpenShift using our pipeline. I wonder how the operator will handle this. Is it going to create a whole new instance and deploy that? Will that result in downtime?
It would be great if someone could provide me with more info!
------------------------------
Kind regards,
Ron Peereboom
------------------------------