In our previous blog, we explored the capabilities of the new Embedded Global Cache introduced in IBM App Connect Enterprise (ACE) 13.0.3.0, and how to configure it for an Integration Server running in an on-premises, VM-based deployment of App Connect Enterprise. In this follow-up blog, we take it a step further by focusing on how to configure the Embedded Global Cache for an Integration Runtime running in a Kubernetes environment such as Red Hat OpenShift. This blog provides step-by-step instructions and key considerations to help you set up and run ACE with the embedded cache on Red Hat OpenShift or another Kubernetes platform.
Example Cache Configuration with three Integration Runtimes

In each Integration Runtime, we deploy message flows that interact with the embedded global cache in that runtime. The following configuration is set up in server.conf.yaml for each Integration Runtime (a sketch of this configuration follows the list):
Integration Runtime IR1
- replicateWritesTo: IntegrationRuntime IR1 is configured to replicate cache writes from its own message flows to IntegrationRuntime IR2, so any values put or updated by IR1’s message flows in IR1’s embedded cache will be asynchronously replicated to IR2. If multiple replicateWritesTo servers were configured, asynchronous write requests would be sent to all of the configured integration servers.
- replicateReadsFrom: IR1 is configured to replicate any missing reads from IR2, i.e. if a value does not exist in IR1’s embedded cache, it will synchronously request that value from IR2 before continuing.
- ReplicationListener: IR1 is also configured to allow other servers to read from and write to its own cache through the replication listener on port 7900.
Integration Runtime IR2
- ReplicationListener: IntegrationRuntime IR2 is configured to allow other servers to read from and write to its own cache through the replication listener on port 7900.
Integration Runtime IR3
- replicateWritesTo: IntegrationRuntime IR3 is configured to replicate cache writes from its own message flows to IntegrationRuntime IR1 and IntegrationRuntime IR2, so any values put or updated by IR3’s message flows in IR3’s embedded cache will be asynchronously replicated to IR1 and IR2.
- replicateReadsFrom: IR3 is configured to replicate any missing reads from IR1 and IR2, i.e. if a value does not exist in IR3’s embedded cache, it will synchronously request that value from IR1 and IR2 before continuing.
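As a reference point, here is a minimal sketch of how IR1's settings might look in server.conf.yaml. The property names are the ones used in this example; the enclosing section, the exact nesting, and how replicateWritesTo and replicateReadsFrom reference the ReplicationServers entries can vary between ACE versions, so cross-check with the previous blog or the product documentation before copying it.

ReplicationListener:
  Port: 7900                    # lets IR2 and IR3 read from and write to IR1's cache
replicateWritesTo: 'server2'    # asynchronously push IR1's cache writes to IR2
replicateReadsFrom: 'server2'   # synchronously fetch missing keys from IR2
ReplicationServers:
  server2:                      # logical name referenced by the two properties above
    Hostname: 'global-cache-eg2-ir.ace.svc'
    EnableTLS: false
    Port: 7900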
In a Kubernetes environment, each IntegrationRuntime runs as a Pod. For the IRs to reach each other for cache reads and writes, the ReplicationListener port must be exposed within the cluster.
You can do this when creating an IntegrationRuntime, by additionally configuring the service on the IntegrationRuntime CR.

You can also edit an existing IR's YAML by adding the following stanza under spec:

Once the IR is created or updated with the service details, you can see port 7900 exposed on that service, in this example global-cache-eg1-ir.
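To confirm that the replication port is exposed, you can list the service and check that 7900 appears in its ports (the service name and the ace namespace below match this example):

oc get service global-cache-eg1-ir -n ace

On a non-OpenShift Kubernetes cluster, kubectl get service works in the same way.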

You can then use the service name as the hostname to communicate between the Pods over the exposed port. As you may have noticed, we used the following hostname and port in the ReplicationServers configuration. The hostname takes the form [service name].[namespace].svc:
ReplicationServers:
  server2:
    Hostname: 'global-cache-eg2-ir.ace.svc'
    EnableTLS: false
    Port: 7900
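For IR3, which replicates to both IR1 and IR2, the ReplicationServers stanza would carry one entry per target. The sketch below derives global-cache-eg1-ir.ace.svc from the naming convention above; treat it as illustrative rather than verbatim configuration.

ReplicationServers:
  server1:
    Hostname: 'global-cache-eg1-ir.ace.svc'
    EnableTLS: false
    Port: 7900
  server2:
    Hostname: 'global-cache-eg2-ir.ace.svc'
    EnableTLS: false
    Port: 7900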
You can now test your message flows by processing some messages that write data to the cache in IR1, and then reading those values back in a message flow running in IR2 or IR3.
Administering Cache in containers
You can use the global-cache resource manager to query map statistics through the administration REST API:
curl --unix-socket /home/aceuser/ace-server/config/IntegrationServer.uds http://localhost/apiv2/resource-managers/global-cache/maps?depth=3
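In a container you typically run this from inside the Integration Runtime Pod, for example with oc exec (replace <ir-pod-name> with the name of your IR Pod; the socket path is the same one used above):

oc exec <ir-pod-name> -- curl --unix-socket /home/aceuser/ace-server/config/IntegrationServer.uds 'http://localhost/apiv2/resource-managers/global-cache/maps?depth=3'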

You can also view this information in the ACE Dashboard UI.

Considerations when using Embedded Global Cache in containers
In a container environment, it is recommended to have a 'mesh' setup, with every IR configured to read from and write to all the other IRs, so that every copy of the cache data stays up to date. One possible risk: if you have a cache item that you update regularly, with no expiry and over an unreliable connection, you may end up with different servers holding different values for that item. For example, an update might reach three out of four other servers before the connection to the fourth is interrupted; that fourth server keeps the old value, and because it already has a value, it never asks the other servers what the current value is.
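To extend this example into a mesh, IR2 (which so far only exposes a listener) would also replicate its writes and missing reads to the other two runtimes. The sketch below reuses the property names from earlier; the service name global-cache-eg3-ir is hypothetical (IR3's service is not named elsewhere in this example), and whether multiple targets are given as a list or in another form should be confirmed against the documentation.

replicateWritesTo: ['server1', 'server3']     # push IR2's writes to IR1 and IR3
replicateReadsFrom: ['server1', 'server3']    # fetch missing keys from IR1 and IR3
ReplicationServers:
  server1:
    Hostname: 'global-cache-eg1-ir.ace.svc'
    EnableTLS: false
    Port: 7900
  server3:
    Hostname: 'global-cache-eg3-ir.ace.svc'   # hypothetical service name for IR3
    EnableTLS: false
    Port: 7900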
If you run multiple replicas of an IR Pod, the cache values can differ between replicas depending on how messages are routed to each one. Keep this factor in mind when designing your cache topology for a multi-replica scenario.