Good points from both Morag and Glen, and they raise an issue with which most customers are familiar: how do they spot whether there is a real problem (as opposed to a system design feature), and, having spotted one, how much Problem Determination do they do themselves?
Queue Depth is a great example to use.
Suppose a peek display (or possibly a performance event) reveals that an application queue has a depth of, say, 50 messages.
Is this good, bad, or OK?
Only application/system knowledge can answer this.
It may be that this is a trigger queue and someone has set a depth trigger of 50 - i.e. MQ will start a process to (hopefully) drain the queue when the depth reaches 50, so a depth of 50 is exactly what was designed.
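As a sketch of what such a definition might look like in MQSC (the queue, process, and initiation queue names here are hypothetical, not from the original post):

```
* Hypothetical depth-triggered queue: MQ fires a trigger message
* when CURDEPTH reaches TRIGDPTH(50)
DEFINE QLOCAL(APP.INPUT.Q) +
       TRIGGER +
       TRIGTYPE(DEPTH) +
       TRIGDPTH(50) +
       INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE) +
       PROCESS(APP.DRAIN.PROCESS)
```

Note that with TRIGTYPE(DEPTH), MQ disables triggering (sets NOTRIGGER) once the trigger fires, so the draining application is expected to re-enable it when done.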
Or perhaps this is a really active queue in a busy system with "burst" activity, and what we are seeing is not a static depth of 50 messages but an average depth, with messages constantly coming and going. As Glen has said, MSGAGE and QTIME will give an idea as to whether we have a healthy dynamic queue or not. And if this is a target queue (i.e. someone should be getting from it), then IPPROCS is always worth checking - a persistent depth with IPPROCS(0) means nobody is reading.
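For anyone wanting to check those attributes, a minimal MQSC sketch (assuming a hypothetical queue name, and that you have authority to alter monitoring):

```
* MSGAGE and QTIME are only collected when queue monitoring is on
ALTER QLOCAL(APP.INPUT.Q) MONQ(MEDIUM)

* Static depth vs. healthy flow: check depth, open input handles,
* age of oldest message, and average time on queue
DIS QSTATUS(APP.INPUT.Q) TYPE(QUEUE) CURDEPTH IPPROCS MSGAGE QTIME
```

A low MSGAGE with a non-trivial CURDEPTH suggests the burst pattern described above; a large and growing MSGAGE with IPPROCS(0) suggests a genuine problem.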
If this is a remote queue, then perhaps there is a problem with the channel, and, as Morag knows better than I do, there is rich information to be found in CHSTATUS - especially XQMSGA and XQTIME.
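A sketch of that channel-side check (channel name is hypothetical; like MSGAGE/QTIME, these fields need channel monitoring enabled):

```
* XQMSGA (age of oldest message on the xmit queue) and XQTIME
* (time messages spend on it) require channel monitoring
ALTER CHANNEL(TO.REMOTE.QM) CHLTYPE(SDR) MONCHL(MEDIUM)

DIS CHSTATUS(TO.REMOTE.QM) STATUS MSGS XQMSGA XQTIME
```

A RUNNING channel with a steadily climbing XQMSGA points at the far end (or the network) rather than the local applications.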
And as an aside, what would Glen and Morag advise regarding monitoring of SYSTEM.CLUSTER.TRANSMIT.QUEUE?
Finally, an MQ practitioner I know with many years' experience has always been slightly wary of queue depth events. His reasoning: given the asynchronous nature of MQ, if a depth event is generated when the queue is at 90% of some critical value, it is possible that by the time the monitoring program has processed that event, the queue has already reached the critical depth.
I cannot say that I have come across this issue, but I offer it here for what it is worth.
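For completeness, this is the kind of configuration being discussed (queue name hypothetical; QDEPTHHI is a percentage of MAXDEPTH):

```
* Performance events must be enabled at the queue manager
ALTER QMGR PERFMEV(ENABLED)

* Event at 90% full, plus a queue-full event as a backstop -
* the window between the two is exactly the race described above
ALTER QLOCAL(APP.INPUT.Q) QDEPTHHI(90) QDPHIEV(ENABLED) QDPMAXEV(ENABLED)
```

Enabling the queue-full event (QDPMAXEV) alongside the high-depth event at least tells the monitor when the race has been lost, rather than leaving it to infer that from application errors.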