Chirag,
Not really. This capability (or lack thereof, depending on how you look at it) has been consistent from the beginning. In other words, it has always been implied that, for most situations, a service should be executed only once in a cluster.
There have been relatively recent changes to the IS to allow the same service to be executed in parallel on multiple servers in the cluster, though, such as the scheduler now letting you create a task that runs on all servers. That scheduler capability doesn't seem to fit this particular use case well, though (or perhaps it does; I'm not sure how current the caches really have to be).
Over the years, I have seen other implementations of custom, distributed caches within the Integration Server. The approach in those was slightly different from the one discussed in this thread. In those implementations, a list of all Integration Servers (or remote server aliases) in the cluster was maintained in a configuration file, and whenever a trigger event occurred (e.g. a polling notification document was received), a service would loop through this list and invoke a specific service on each server to refresh the caches. A rough sketch of that pattern is below.
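To make the pattern concrete, here is a minimal Java sketch, assuming the other nodes are defined as remote server aliases on the calling IS and invoked through pub.remote:invoke. The alias names and the cache-refresh service name (my.cache.admin:refreshCache) are placeholders for illustration, not anything from an actual setup.

```java
// Minimal sketch (not production code) of the "loop and invoke" pattern:
// take a list of remote server aliases and call a cache-refresh service
// on each node via pub.remote:invoke.
import com.wm.app.b2b.server.Service;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

public final class RefreshClusterCaches {

    // In the implementations I saw, this list came from a configuration file;
    // it is hard-coded here only to keep the sketch self-contained.
    private static final String[] REMOTE_ALIASES = { "node1", "node2", "node3" };

    // Fully qualified name of the (hypothetical) service that reloads the cache.
    private static final String REFRESH_SERVICE = "my.cache.admin:refreshCache";

    public static void refreshAll() {
        for (String alias : REMOTE_ALIASES) {
            try {
                // Build the input for pub.remote:invoke: the remote server
                // alias and the service to execute on that node.
                IData input = IDataFactory.create();
                IDataCursor cur = input.getCursor();
                IDataUtil.put(cur, "$alias", alias);
                IDataUtil.put(cur, "$service", REFRESH_SERVICE);
                cur.destroy();

                Service.doInvoke("pub.remote", "invoke", input);
            } catch (Exception e) {
                // One unreachable node should not stop the refresh on the
                // others; real code would log and possibly retry here.
            }
        }
    }
}
```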
A slight variation on this is to retrieve the list of clustered servers from the IS itself (e.g. by calling wm.server:getClusterNodes); a small sketch of that lookup follows this paragraph. I must say, however, that I think a solution that leverages messaging, as Olivier is attempting, is more robust.
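For completeness, something along these lines could replace the hard-coded list in the sketch above. Keep in mind that wm.server:getClusterNodes is an internal, undocumented service, so the output field name used here ("nodes") is an assumption; check the actual pipeline output on your IS version.

```java
import com.wm.app.b2b.server.Service;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

public final class ClusterNodeLookup {

    // Ask the IS itself for its cluster members instead of reading a config
    // file. The "nodes" output field name is an assumption, not a documented API.
    public static IData[] getNodes() throws Exception {
        IData out = Service.doInvoke("wm.server", "getClusterNodes", IDataFactory.create());
        IDataCursor cur = out.getCursor();
        IData[] nodes = IDataUtil.getIDataArray(cur, "nodes"); // assumed field name
        cur.destroy();
        return (nodes != null) ? nodes : new IData[0];
    }
}
```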
Percio
#Adapters-and-E-Standards#Integration-Server-and-ESB#webMethods