Hi Stan,
If there are only 2,000 messages a day, I doubt the delay is due to overload-related slowness or some race condition. A million or two messages spread evenly over the day through other queues should not have much effect. Most likely it is one of the following:
a) An IS native trigger is used, so the trigger polling delay kicks in and IS polls roughly once every 2 seconds. Even though IS had left the callback on the Broker, and the Broker did deliver the message to IS right away, IS only checks for message availability on its next poll, so processing starts a second or so later.
b) A custom Java/JMS client is used, and it relies on a polling receive rather than an asynchronous listener (see the sketch after this list).
c) A JMS client (or an IS JMS trigger) is used with MaxReceive (or IS trigger prefetch) of 1 against a Broker Cluster. This max-receive=1 setting forces the Broker JMS client into polling mode so that it fetches exactly one message from exactly one Broker in the cluster. That polling is again periodic and causes a second or two of delay between publish and the start of processing.
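For (b), the difference shows up clearly in plain JMS code. This is only a generic sketch using the standard javax.jms API (the JNDI names "ConnectionFactory" and "ordersQueue" are placeholders, not your actual configuration): a receive(timeout) loop re-checks the queue periodically, so a message published just after a poll sits there until the next poll, while a MessageListener gets the message pushed as soon as it arrives.

```java
import javax.jms.*;
import javax.naming.InitialContext;

public class ReceiveModes {
    public static void main(String[] args) throws Exception {
        // Placeholder JNDI names -- replace with your factory and destination.
        InitialContext jndi = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Queue queue = (Queue) jndi.lookup("ordersQueue");

        Connection con = cf.createConnection();
        Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);

        // Polling style (adds latency): each call re-checks the queue, so a
        // message that arrives right after a poll waits until the next one.
        // while (running) {
        //     Message m = consumer.receive(2000);   // wait up to 2 s, then return null
        //     if (m != null) { /* process */ }
        // }

        // Push style: the provider hands the message to the listener as soon
        // as it arrives, so there is no polling gap.
        consumer.setMessageListener(msg -> {
            try {
                System.out.println("Got message " + msg.getJMSMessageID());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        con.start();               // start delivery
        Thread.sleep(60_000);      // keep the listener alive for the demo
        con.close();
    }
}
```

If your custom client follows the commented-out receive(...) pattern, that alone can explain a one-to-two-second lag.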
If you suspect load-related issues, use standard system monitoring and look for high CPU utilization (not overall, but a few threads running at 100%), high disk utilization (say 60% or more), and significant network utilization (say several MB per second averaged over days). From the application side, you can check /diag.log for response-related entries. A non-empty diag.log indicates some issue occurred at some point since the last restart; a steady flow of response-related entries in diag.log indicates an ongoing issue.
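If it helps, here is a trivial sketch (plain Java; the path is just a placeholder, point it at the actual location of diag.log) that reports whether the log is empty and shows its last few entries:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;

public class DiagLogCheck {
    public static void main(String[] args) throws IOException {
        // Placeholder path -- replace with the real location of diag.log
        Path diagLog = Paths.get(args.length > 0 ? args[0] : "diag.log");
        List<String> lines = Files.readAllLines(diagLog);
        if (lines.isEmpty()) {
            System.out.println(diagLog + " is empty -- nothing logged since the last restart");
        } else {
            System.out.println(lines.size() + " entries since the last restart; last few:");
            lines.subList(Math.max(0, lines.size() - 10), lines.size())
                 .forEach(System.out::println);
        }
    }
}
```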
#Universal-Messaging-Broker #webMethods