The new Embedded Global Cache in IBM App Connect Enterprise 13.0.3.0

By Aaron Gashi posted Fri March 28, 2025 11:09 AM


What is the new Embedded Global Cache?

The Embedded Global Cache released in ACE 13.0.3.0 is a replacement for the existing WebSphere eXtreme Scale (WXS) embedded cache, now called the Embedded WXS grid, from previous versions of ACE/IIB. The Embedded WXS Grid, as per the statement of direction, is now deprecated, but will continue to work while Java 8 is still supported in ACE.

The Embedded Global Cache, like the Embedded WXS Grid before it, provides a way for you to store data that you want to reuse, whether that’s within the same flow, a different flow, or flows running in different integration servers, eliminating the need for alternative solutions such as a database.

However, the Embedded Global Cache can be used with Java 8 and 17, with simpler configuration and behaviour, and is supported for use in containers.

Configuring the new Embedded Global Cache

Within a single Integration Server

The embedded global cache is on by default, as it uses almost zero CPU and memory until it has data stored in it, and can function without any replication configured, much like the Local Cache. So unless you have configured your server to use WXS or a Local Cache by default, or you are creating your maps with a new RedisConnection Policy for an external Redis connection, any global maps you access will be using the embedded cache.

Replication across Integration Servers

The replication system for the embedded cache has three parts that can be configured. Below I have illustrated three servers, with their relevant server.conf.yaml settings, and how they could share their embedded cache. While secure replication is possible with the new Embedded Global Cache, I have omitted TLS configuration in the example below for the sake of brevity.

Example Cache Configuration with Three Servers

Within each integration server, message flows interact with the Embedded Global Cache in that server.

  1. replicateWritesTo - Integration Server 1 is configured to replicate cache writes from its own message flows to Integration Server 2, so any values that server 1’s message flows put into or update in server 1’s embedded cache are asynchronously replicated to server 2. If multiple replicateWritesTo servers were configured, asynchronous write requests would be sent to all of the configured integration servers.
  2. replicationListener - Integration Server 2 is configured to allow other servers to read from and write to its own cache through the replication listener on port 7900.
  3. replicateReadsFrom - Integration Server 3 is configured to replicate missing reads from server 2, i.e. if a value does not exist in server 3’s embedded cache, server 3 will synchronously request that value from server 2 before continuing. If multiple replicateReadsFrom servers were specified, each server would be tried synchronously, in order, until a value was found or all servers had been tried.
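The three settings above could be sketched as server.conf.yaml fragments for the three servers. The nesting under the GlobalCache ResourceManager section follows the section name mentioned later in this post, but the exact property layout (scalar versus list values, the host:port format, and the replicationListener nesting) is an assumption for illustration; consult the product documentation for the authoritative schema.

```yaml
# Integration Server 1: push cache writes from its own flows to server 2
# (property layout assumed for illustration)
ResourceManagers:
  GlobalCache:
    replicateWritesTo: 'server2host:7900'
---
# Integration Server 2: listen for replication requests from other servers
ResourceManagers:
  GlobalCache:
    replicationListener:
      port: 7900
---
# Integration Server 3: on a local cache miss, synchronously read from server 2
ResourceManagers:
  GlobalCache:
    replicateReadsFrom: 'server2host:7900'
```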

You can configure all of these within a single integration server, replicating its reads and writes to other servers and listening for other servers’ requests to do the same.

Optimising the new Embedded Global Cache

In previous versions of ACE, the Embedded WXS Grid could only be optimised on or off in its entirety, and it required Java. The Embedded Global Cache, however, can be optimised more granularly.

If you have a dedicated cache server that is used only for cache replication, with no flows deployed to it, that server can be optimised so that just the embedded cache and embedded replication backends are enabled, and it can run without Java.

Conversely, if a server is not configured to replicate its cache or to listen for replication requests, and only its own message flows use the embedded cache, then the embedded cache replication backends can be optimised off while your flows continue to use the embedded cache within that server.

Administration of the new Embedded Global Cache

Administration of the new Embedded Global Cache is done through two new ibmint commands: ibmint display cache and ibmint clear cache.

ibmint display cache

ibmint display cache is a new command that allows you to view the replication configuration of your embedded cache, the maps within it, and the number of keys in and amount of memory used by those maps.

For example, to get the replication settings for a server with an admin port of 7600:

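A sketch of that invocation follows. The --admin-host/--admin-port connection flags are assumptions based on other ibmint commands that address a running server; check `ibmint display cache --help` for the exact syntax.

```shell
# Show the replication settings of the server whose admin port is 7600
# (connection flag names are assumptions, not confirmed syntax)
ibmint display cache --admin-host localhost --admin-port 7600
```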

And with the ‘--all-maps’ flag we can see how many maps are currently in this server’s cache, how many keys they have, and how much memory they are using, for example:

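A listing of every map might be requested like this (as above, the connection flag names are assumptions; only the ‘--all-maps’ flag is named in this post):

```shell
# List all maps in this server's cache, their key counts, and memory usage
ibmint display cache --admin-host localhost --admin-port 7600 --all-maps
```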

Or we can query specific maps, by repeating the ‘--map-name’ flag for each map we want to query, for example to query just myMap1 and myMap2:

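Querying just myMap1 and myMap2 by repeating the ‘--map-name’ flag might look like this (connection flag names again assumed):

```shell
# Query only the named maps
ibmint display cache --admin-host localhost --admin-port 7600 --map-name myMap1 --map-name myMap2
```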

ibmint clear cache

ibmint clear cache is a new command that allows you to clear maps within one copy of the embedded global cache, for example:

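An invocation might look like the following. The connection flags are assumptions, and whether ‘--map-name’ is required or whether the whole cache can be cleared at once is not stated in this post; check `ibmint clear cache --help`.

```shell
# Clear myMap1 from this server's copy of the embedded global cache
ibmint clear cache --admin-host localhost --admin-port 7600 --map-name myMap1
```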

Please note that clearing is not propagated to other servers that the cleared server may replicate to. So if you have a network of servers you want to clear out, you will have to repeat this command for each of them.

Migrating from an Embedded WXS Grid to the Embedded Global Cache

To migrate, you will have to update your configuration settings to use the new Embedded Global Cache as per Configuring the new Embedded Global Cache and update any usage of mqsicacheadmin to use the new ibmint commands under Administration of the new Embedded Global Cache.

Within a flow, how you use the global cache remains mostly unchanged. Flows that worked with the Embedded WXS Grid should also work with the Embedded Global Cache once your configuration and administration have been updated. There are no migration steps or changes in flow development required; just bear in mind the Flow development behaviours below.

Behavioural considerations

While we have made every effort to make migrating to and using the Embedded Global Cache as simple and easy as possible, please find below the common behavioural considerations that we think you may encounter when migrating or developing with the Embedded Global Cache.

Replication behaviour

This simpler architecture differs from the embedded WXS cache in some respects, so when configuring your replication, consider the following behaviours.

Requests arriving via the listener don’t propagate

Only values written by a server's own message flows are replicated to the servers on that server's replicateWritesTo list. If a value is written to 'server 4' via its listener from another server, that value will not be propagated further to the servers on 'server 4's replicateWritesTo list.
Similarly, only values being read by a server’s own message flows are requested from the replicateReadsFrom servers when not found locally. If a read request arrives at 'server 4's listener and 'server 4' does not have the value locally, it will not propagate the read request to the servers on 'server 4's replicateReadsFrom list.

Replication failure is not a flow failure

Requests to replicate a write are asynchronous, and a failure to replicate will not affect your flow.

Similarly, if a request to replicate a read fails it is treated as if the value was not found in that server.

Cache consistency

Unlike the Embedded WXS Grid, the Embedded Global Cache takes a deliberately simple approach to cache consistency. For example, using the “Example Cache Configuration with Three Servers”, if server 1 puts the key “foo” with value “blah” into a map, that key and value are replicated to server 2. But if server 2 then has its own message flow overwrite the key “foo” with the value “baz”, server 1 will not be updated.

So if you need a consistent cache across multiple servers, you will want all of them to read from and write to each other, or to a common server or set of servers.
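The scenario above can be illustrated with a toy model: plain Java maps stand in for the two servers' caches, and a write replicates exactly one hop to the writer's replicateWritesTo targets, never further. This is not product code, just a sketch of the semantics described above.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the "Example Cache Configuration with Three Servers" scenario,
// reduced to servers 1 and 2. Not product code.
public class CacheConsistencyDemo {
    static Map<String, String> server1 = new HashMap<>();
    static Map<String, String> server2 = new HashMap<>();

    // Server 1 writes locally and (asynchronously, in the real product)
    // replicates the value to server 2, its replicateWritesTo target.
    static void server1Put(String key, String value) {
        server1.put(key, value);
        server2.put(key, value);
    }

    // Server 2 writes locally only: it has no replicateWritesTo entry,
    // so its updates never flow back to server 1.
    static void server2Put(String key, String value) {
        server2.put(key, value);
    }

    public static void main(String[] args) {
        server1Put("foo", "blah");               // replicated to server 2
        server2Put("foo", "baz");                // local overwrite on server 2 only
        System.out.println(server1.get("foo"));  // still "blah" - server 1 not updated
        System.out.println(server2.get("foo"));  // "baz"
    }
}
```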

Flow development behaviours

Thread behaviour with MbGlobalMapSessionPolicy

The thread behaviour with the MbGlobalMapSessionPolicy, which is used to specify the Time To Live (TTL) for values within that map, has changed.

Previously, when using WXS, the last TTL value set when creating an MbGlobalMap on a given thread was used by all MbGlobalMaps on that thread, for example:


MbGlobalMap map1 = MbGlobalMap.getGlobalMap("My.Map", new MbGlobalMapSessionPolicy(10));
map1.put("key1", "value"); // entry has a 10 second TTL

MbGlobalMap map2 = MbGlobalMap.getGlobalMap("My.Map", new MbGlobalMapSessionPolicy(20));

// Don't use map2, but it sets the TTL on the thread local handle

// Note, reusing the same map handle will use the TTL value last set on this thread
map1.put("key2", "value"); // entry has a 20 second TTL(!)

MbGlobalMap map3 = MbGlobalMap.getGlobalMap("My.Map");
map3.put("key3", "value"); // entry also has a 20 second TTL

With the new embedded cache (and Redis), the TTL is specific to the map handle, for example:


MbGlobalMap map1 = MbGlobalMap.getGlobalMap("My.Map", new MbGlobalMapSessionPolicy(10));
map1.put("key1", "value"); // entry has a 10 second TTL

MbGlobalMap map2 = MbGlobalMap.getGlobalMap("My.Map", new MbGlobalMapSessionPolicy(20));

// Note, reusing the same map handle will use the handle specific TTL
map1.put("key2", "value"); // entry has a 10 second TTL

MbGlobalMap map3 = MbGlobalMap.getGlobalMap("My.Map");
map3.put("key3", "value"); // entry has no TTL

Getting an MbGlobalMap with a connection policy name

When you are getting a map within a flow, you can provide a policy name. In previous versions of ACE this could only be a WXS Server policy for connecting to an external WXS grid; in ACE 13.0.3.0 it can also be a Redis Connection policy for connecting to an external Redis server. Using either of these options overrides the default cache type and gets you a connection to a cache of the respective type.

The default map type when getting an MbGlobalMap without a connection policy name

When you get a global map without a policy name, you will get a map of the default cache type for that server. As of ACE 13.0.3, the default cache type is determined as follows when the server starts up:

  • The default cache type starts as the ‘embedded’ global cache.
  • If you have the now deprecated “cacheOn: true” property set in the GlobalCache ResourceManager section of your server.conf.yaml, your default cache type will be overridden to Embedded WXS Grid.
  • If you have the now deprecated “defaultCacheType:” property set to “global” (for WXS) or “local”, your default cache type will be overridden to Embedded WXS Grid or Local respectively.
  • If you have the environment variable “MQSI_GLOBAL_CACHE_USE_LOCAL_CACHE” set in the integration server's environment, the default cache type is overridden to Local.
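The decision sequence above can be sketched as a small helper. The method name and parameters are illustrative only, not product code, and the relative precedence shown (later bullets overriding earlier ones) is an assumption based on the order of the list.

```java
// Toy sketch of the default-cache-type decision described above.
// Inputs mirror the deprecated settings; all names here are illustrative.
public class DefaultCacheType {
    static String resolve(boolean cacheOn, String defaultCacheType, boolean useLocalEnvVar) {
        String type = "embedded";                        // starting default
        if (cacheOn) type = "embedded WXS grid";         // deprecated "cacheOn: true"
        if ("global".equals(defaultCacheType)) type = "embedded WXS grid";
        if ("local".equals(defaultCacheType)) type = "local";
        if (useLocalEnvVar) type = "local";              // MQSI_GLOBAL_CACHE_USE_LOCAL_CACHE
        return type;
    }

    public static void main(String[] args) {
        System.out.println(resolve(false, null, false)); // embedded
        System.out.println(resolve(true, null, false));  // embedded WXS grid
        System.out.println(resolve(false, "local", false)); // local
    }
}
```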

Known Issues
