I had some offline conversations with Jose regarding this topic and ran some tests using his programs.
It turned out that the timing differences are probably caused by having too many parallel threads/processes running on the same machine.
To get a better idea of the performance implications of socket pooling I modified the Java client and server examples of EntireX. The Broker object (as well as the BrokerService object) is now instantiated inside the inner loop, as in the sketch below.
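For reference, the change is essentially to move the object construction into the timed loop. The following is only a minimal sketch of that idea: the com.softwareag.entirex.aci class names follow the standard EntireX Java ACI, but the request call (sendReceive) and the pooling switch (setPoolConnections) are placeholders I added for illustration; please check the ACI documentation of your EntireX version for the exact names and signatures.

import com.softwareag.entirex.aci.Broker;
import com.softwareag.entirex.aci.BrokerService;

public class PoolingTimer {
    public static void main(String[] args) throws Exception {
        int calls = 1000;

        // Hypothetical switch -- the real property/method that enables or
        // disables socket pooling may differ in your EntireX version.
        // Broker.setPoolConnections(false);

        long start = System.currentTimeMillis();
        for (int i = 0; i < calls; i++) {
            // Broker and BrokerService are now created inside the loop, so
            // every iteration pays the full connection cost unless the
            // transport can reuse a socket from the pool.
            Broker broker = new Broker("brokerhost:1971", "JAVA-CLIENT");
            broker.logon();
            BrokerService service = new BrokerService(broker, "ACLASS/ASERVER/ASERVICE");

            // Placeholder for the actual non-conversational request issued
            // by the EntireX example client.
            // service.sendReceive("PING");

            broker.logoff();
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("Average response time: " + (elapsed / (double) calls) + " ms");
    }
}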
Both the client and the server are running on the same Windows 2000 machine, while the Broker is running on a different Windows 2000 machine.
The average response time for one non-conversational call (1000 calls in total) is 2 ms when socket pooling is enabled and more than 400 ms when socket pooling is disabled.
This is not too surprising: in the first case a single socket is reused for all calls, whereas in the second case 1000 socket connections are established and closed.
Kind regards,
Rolf
#EntireX#webMethods#Mainframe-Integration