You should be very careful about what you read into the performance report, or into the outcome of a simple test of producing and consuming messages with single, basic applications.
Each environment tends to have a single most restrictive bottleneck. Using fastpath bindings makes a big difference to CPU usage: it typically avoids two thread context switches per MQI call. If you are CPU constrained, either across the entire system (multiple concurrent threads of execution) or on a single critical application context, then switching to fastpath will have a big effect. If, on the other hand, you are primarily network constrained or disk constrained, it will have minimal effect.
In order to maximize your performance you really need a good simulation of your production environment where you can measure the effect of individual changes.
Setting MQIBindType=FASTPATH has virtually no downside in a well-managed environment (one where internal MQ processes are never terminated abruptly), so it is generally good advice to enable this option. It may make no difference to the throughput of your system, but it is very likely to reduce your CPU usage.
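For reference, a minimal sketch of where that setting lives (the exact path to qm.ini varies by platform and installation, and the queue manager has to be restarted for the change to take effect):

    # Excerpt from qm.ini - the Channels stanza controls how the queue
    # manager's own channel processes bind; FASTPATH avoids the separate
    # agent process.
    Channels:
       MQIBindType=FASTPATH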
I would NEVER recommend running user-written applications in fastpath mode: a fastpath (trusted) application runs as part of the queue manager, so an application error or abrupt termination can damage the queue manager itself.
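(For clarity, that application-level choice is made at connect time. The sketch below is shown only to illustrate what is being discouraged; the empty queue manager name and minimal error handling are for brevity.)

    /* Sketch only: how an application becomes a trusted (fastpath)
     * application.  The paragraph above advises against this for
     * user-written code, because the application then runs as part of
     * the queue manager. */
    #include <stdio.h>
    #include <cmqc.h>

    int main(void)
    {
        MQCNO   cno = {MQCNO_DEFAULT};
        MQHCONN hConn;
        MQLONG  compCode, reason;
        char    qmName[MQ_Q_MGR_NAME_LENGTH] = "";   /* default queue manager */

        cno.Options = MQCNO_FASTPATH_BINDING;        /* trusted application  */

        MQCONNX(qmName, &cno, &hConn, &compCode, &reason);
        if (compCode == MQCC_FAILED)
            printf("MQCONNX failed, reason %d\n", (int)reason);
        else
            MQDISC(&hConn, &compCode, &reason);
        return 0;
    }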
I would probably start with an analysis of whether you are predominantly using persistent or non-persistent messages. If persistent, then I would make sure the I/O latency is low, or that the degree of application concurrency is high. If non-persistent, then looking at CPU usage and at whether messages are being streamed to applications (minimising TCP line turnarounds) might be more relevant; again, the degree of application concurrency is likely to have a very significant bearing on the throughput.
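(As a reminder of where that choice sits, persistence is selected per message in the MQMD, or defaulted from the queue's DEFPSIST attribute. A minimal sketch, with a hypothetical helper name:)

    /* Sketch of the per-message persistence choice (the helper name is
     * illustrative).  Non-persistent messages are not written to the
     * recovery log, which is why disk latency matters mainly for
     * persistent workloads. */
    #include <cmqc.h>

    void put_one(MQHCONN hConn, MQHOBJ hObj, void *data, MQLONG len, int persistent)
    {
        MQMD   md  = {MQMD_DEFAULT};
        MQPMO  pmo = {MQPMO_DEFAULT};
        MQLONG compCode, reason;

        md.Persistence = persistent ? MQPER_PERSISTENT
                                    : MQPER_NOT_PERSISTENT;
        pmo.Options    = MQPMO_NO_SYNCPOINT | MQPMO_NEW_MSG_ID;

        MQPUT(hConn, hObj, &md, &pmo, len, data, &compCode, &reason);
    }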
Applying a one-size-fits-all approach to MQ performance tuning isn't generally a good idea.
------------------------------
Andrew Hickson
------------------------------
Original Message:
Sent: Thu September 15, 2022 05:03 AM
From: Armand
Subject: Publish-Subscribe performance report (Clarification about remote and local binding)
Thanks Colin. I will make that test.
Best regards
------------------------------
Armand
IT Specialist
IBM
Madrid
Original Message:
Sent: Tue September 13, 2022 07:44 AM
From: Colin Paice
Subject: Publish-Subscribe performance report (Clarification about remote and local binding)
You might want to consider a test of putting a message to a queue and getting the same message back, using client bindings, local bindings, etc., to see the effect of your network.
If you vary the message size as well, e.g. 1 KB, 10 KB, 50 KB, you will get a good feel for the network and how it impacts your performance.
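A minimal sketch of such a put/get timing loop, assuming a local queue called TEST.QUEUE already exists (the queue name, iteration count and sizes are illustrative); build it once against the server bindings library and once against the client library and compare the results:

    /* Sketch of a round-trip put/get timing test.  POSIX timing is used
     * here; substitute an equivalent timer on Windows. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <cmqc.h>

    int main(void)
    {
        MQHCONN hConn;
        MQHOBJ  hObj;
        MQOD    od  = {MQOD_DEFAULT};
        MQPMO   pmo = {MQPMO_DEFAULT};
        MQGMO   gmo = {MQGMO_DEFAULT};
        MQLONG  compCode, reason, dataLength;
        char    qmName[MQ_Q_MGR_NAME_LENGTH] = "";   /* default queue manager */
        static char buffer[50 * 1024];               /* up to 50 KB payloads  */
        MQLONG  sizes[] = {1024, 10 * 1024, 50 * 1024};
        int     i, m, iterations = 1000;

        MQCONN(qmName, &hConn, &compCode, &reason);
        if (compCode == MQCC_FAILED) { printf("MQCONN failed, reason %d\n", (int)reason); return 1; }

        strncpy(od.ObjectName, "TEST.QUEUE", MQ_Q_NAME_LENGTH);
        MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_INPUT_AS_Q_DEF, &hObj, &compCode, &reason);
        if (compCode == MQCC_FAILED) { printf("MQOPEN failed, reason %d\n", (int)reason); return 1; }

        memset(buffer, 'A', sizeof(buffer));
        pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_NEW_MSG_ID;
        gmo.Options = MQGMO_NO_SYNCPOINT | MQGMO_WAIT;
        gmo.WaitInterval = 5000;                     /* 5 seconds */

        for (m = 0; m < 3; m++)
        {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (i = 0; i < iterations; i++)
            {
                MQMD putMd = {MQMD_DEFAULT};         /* fresh descriptors so that  */
                MQMD getMd = {MQMD_DEFAULT};         /* MsgId matching stays clean */
                MQPUT(hConn, hObj, &putMd, &pmo, sizes[m], buffer, &compCode, &reason);
                MQGET(hConn, hObj, &getMd, &gmo, (MQLONG)sizeof(buffer), buffer,
                      &dataLength, &compCode, &reason);
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);
            printf("%d byte messages: %d put/get pairs in %.2f seconds\n",
                   (int)sizes[m], iterations,
                   (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        }

        MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
        MQDISC(&hConn, &compCode, &reason);
        return 0;
    }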
Colin
------------------------------
Colin Paice
Original Message:
Sent: Thu August 25, 2022 11:59 AM
From: armand
Subject: Publish-Subscribe performance report (Clarification about remote and local binding)
Hi
I would like to apply some of the recommendations in the Publish/Subscribe performance report to a complex environment I manage.
The latest one I found is for MQ V8, here: https://public.dhe.ibm.com/software/integration/support/supportpacs/individual/mp0e.pdf
In it we see interesting statistics about peak rates, with a large difference depending on whether client or local bindings are used.
My concern is that in our environment the publishing applications are all hosted on many different Windows servers, each of which also hosts a local QM so that the applications do not connect directly to the central remote QM.
The central QM, where all the topics and subscriptions are defined, is on a central Linux MQ server.
So, to publish from an application to the central QM, we PUT to an alias queue on the local Windows QM that points to an alias of a topic on the central Linux QM.
If I refer to the table of peak rates in the report, which row applies to our environment (which type of application bindings)? If it is local bindings, how can I configure it to use fastpath?
Best regards
Armand
------------------------------
Armand
IT Specialist
IBM
Madrid
------------------------------