Introduction
The embedded Global Cache (GC) feature in IBM Integration Bus (IIB) and IBM App Connect Enterprise (ACE) provides an in-memory caching solution based on WebSphere eXtreme Scale (WXS) technology. Many IIB/ACE customers use this feature to store static data and so minimize network/IO interactions with back-end systems, which can be expensive in terms of performance.
While this works well for traditional on-premises or VM-style deployments of IIB/ACE, WebSphere eXtreme Scale does not support the embedded global cache in a Kubernetes environment. Customers looking to migrate to IBM Cloud Pak for Integration (CP4I) or another Kubernetes-based environment therefore cannot move an existing topology that relies on the embedded global cache as-is to the container platform. Instead, they need to refactor their GC topology into a hybrid architecture in which the global cache components are hosted outside the container platform. In this case, the integration servers deployed to the container platform (for example, CP4I) act only as clients to the GC servers; that is, the integration servers running in CP4I do not host catalogs or containers.
Refactoring Steps
An example of such a hybrid topology is depicted in the figure below.
Figure 1. Global Cache topology across on-premises VM and CP4I/K8s environment
In the topology shown in Figure 1, an integration node runs in an on-premises or VM environment. In this example, it has two integration servers, both of which act as catalog and container servers.
For the K8s/CP4I environment, we create one or more integration servers (single or multiple replicas) and configure them to connect to the catalogs and containers defined in the VM environment.
The following sections describe the configuration that you need to perform to set up such a hybrid topology.
server.conf.yaml configuration for on-premises or VM-based integration servers
These integration servers host the catalog and container components of the global cache.
Integration Server 1:
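A minimal sketch of the GlobalCache stanza is shown below. The host name (onprem-host), ports, and domain name are placeholder values, and the server names MyCatalogServer and MyCatalogServer2 match the names that appear in the mqsicacheadmin output later; verify the property names and defaults against the server.conf.yaml shipped with your IIB/ACE version.
ResourceManagers:
  GlobalCache:
    cacheOn: true                                  # enable the global cache in this integration server
    cacheServerName: 'MyCatalogServer'             # unique name for the catalog/container server hosted here
    catalogServiceEndPoints: 'onprem-host:2800,onprem-host:2810'
    catalogDomainName: 'WMB_MyCacheDomain'
    catalogClusterEndPoints: 'MyCatalogServer:onprem-host:2803:2801,MyCatalogServer2:onprem-host:2813:2811'   # serverName:host:clientPort:peerPort, placeholder ports
    enableCatalogService: true                     # this server hosts a catalog server
    enableContainerService: true                   # this server also hosts a container server
    listenerHost: 'onprem-host'
    listenerPort: 2800                             # placeholder base port for this server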
Integration Server 2:
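The second server uses the same domain and endpoint values; a sketch with the same placeholder values, differing only in the server name and listener port:
ResourceManagers:
  GlobalCache:
    cacheOn: true
    cacheServerName: 'MyCatalogServer2'            # unique name for the second catalog/container server
    catalogServiceEndPoints: 'onprem-host:2800,onprem-host:2810'
    catalogDomainName: 'WMB_MyCacheDomain'
    catalogClusterEndPoints: 'MyCatalogServer:onprem-host:2803:2801,MyCatalogServer2:onprem-host:2813:2811'
    enableCatalogService: true
    enableContainerService: true
    listenerHost: 'onprem-host'
    listenerPort: 2810                             # placeholder; must not clash with the first server's port range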
server.conf.yaml configuration for integration servers running in CP4I/K8s
The integration server pods make client connections to the catalogs and containers defined above. Create a server.conf.yaml override with the following stanza:
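A minimal sketch, reusing the placeholder host, domain, and port values from the on-premises examples above. Only the client-side properties are set; the catalog and container services are left disabled so that the pod acts purely as a cache client.
ResourceManagers:
  GlobalCache:
    cacheOn: true                                  # enable global cache client support
    catalogServiceEndPoints: 'onprem-host:2800,onprem-host:2810'   # same value as on premises
    catalogDomainName: 'WMB_MyCacheDomain'                         # same value as on premises
    catalogClusterEndPoints: 'MyCatalogServer:onprem-host:2803:2801,MyCatalogServer2:onprem-host:2813:2811'   # same value as on premises
    enableCatalogService: false                    # do not host a catalog server in the pod
    enableContainerService: false                  # do not host a container server in the pod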
Specify this server.conf.yaml as a Configuration object while deploying an integration server in the CP4I environment.
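As an illustration of that step, a Configuration custom resource of type serverconf carries the base64-encoded server.conf.yaml, and the IntegrationServer resource references it by name. The resource names, namespace, and apiVersion below are placeholders; check them against the version of the App Connect operator you are using.
apiVersion: appconnect.ibm.com/v1beta1
kind: Configuration
metadata:
  name: gc-serverconf
  namespace: ace
spec:
  type: serverconf
  contents: <base64-encoded server.conf.yaml>      # for example, the output of: base64 -w0 server.conf.yaml
---
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: gc-client-is
  namespace: ace
spec:
  configurations:
    - gc-serverconf                                # attach the server.conf.yaml override
  # ... remaining IntegrationServer settings (license, version, barURL, and so on)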
Points to note:
The values of the catalogServiceEndPoints, catalogDomainName, and catalogClusterEndPoints parameters defined in server.conf.yaml for the integration server running in CP4I must be the same as those in the server.conf.yaml files of the on-premises/VM integration servers.
Make sure that the on-premises integration node is started and the catalogs and containers are up and running before deploying the message flow to the integration server running in CP4I.
Once the deployment is successful and the integration server makes a client connection to the catalog server, a BIP7155I message appears in the integration server pod log, indicating that the integration server has successfully made a client connection to the catalog server running in the on-premises environment.
Note:
- For such a topology to work, the on-premises server/VM must be reachable from your OCP cluster.
- Regarding performance implications: there is currently no official performance study of such a hybrid architecture, but performance will depend heavily on the network latency between the OCP/K8s cluster (where the integration flows are deployed) and the VM where the global cache catalog/containers are hosted.
Testing and Verifying the Integration Flow
Create a simple flow that accesses the global cache by using a JavaCompute node.
The sample code below creates a map in the cache and inserts a key-value pair into the map.
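A minimal sketch of such a JavaCompute node implementation is shown below, using the MbGlobalMap API from the IBM Integration API (com.ibm.broker.plugin). The class name, key, and value are illustrative, and the response is returned as a JSON field.
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbElement;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbGlobalMap;
import com.ibm.broker.plugin.MbJSON;
import com.ibm.broker.plugin.MbMessage;
import com.ibm.broker.plugin.MbMessageAssembly;
import com.ibm.broker.plugin.MbOutputTerminal;

public class CreateMap_JavaCompute extends MbJavaComputeNode {

    public void evaluate(MbMessageAssembly inAssembly) throws MbException {
        MbOutputTerminal out = getOutputTerminal("out");

        // Get (or implicitly create) the named map in the global cache
        MbGlobalMap testMap = MbGlobalMap.getGlobalMap("testmap");

        // Insert a key-value pair; put() fails if the key already exists,
        // so guard with containsKey() for repeat test runs
        if (!testMap.containsKey("myKey")) {
            testMap.put("myKey", "myValue");
        }

        // Build a simple JSON response message
        MbMessage outMessage = new MbMessage();
        MbMessageAssembly outAssembly = new MbMessageAssembly(inAssembly, outMessage);
        MbElement json = outMessage.getRootElement().createElementAsLastChild(MbJSON.PARSER_NAME);
        MbElement data = json.createElementAsLastChild(MbElement.TYPE_NAME, MbJSON.DATA_ELEMENT_NAME, null);
        data.createElementAsLastChild(MbElement.TYPE_NAME_VALUE, "Result", "Map Created Successfully");

        out.propagate(outAssembly);
    }
}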
Deploy the BAR file along with the server.conf.yaml Configuration on your App Connect instance on CP4I.
Execute the message flow with a test message. On successful execution, you should see a response message that says: 'Map Created Successfully'.
On the on-premises integration node that hosts the catalog and containers, run the mqsicacheadmin command to confirm that 'testmap' has been created successfully on the containers.
Here is an example of the output:
$ mqsicacheadmin ASV12 -c showMapSizes
BIP7187I: Output from the mqsicacheadmin command. The output from the WebSphere eXtreme Scale xscmd utility is '
Starting at: 2021-09-03 02:32:54.626
CWXSI0068I: Executing command: showMapSizes
*** Displaying results for WMB data grid and mapSet map set.
*** Listing maps for MyCatalogServer ***
Map Name                     Partition  Map Entries  Used Bytes  Shard Type          Container
--------                     ---------  -----------  ----------  ----------          ---------
SYSTEM.BROKER.CACHE.CLIENTS  0          1            744 B       Primary             MyCatalogServer_C-1
SYSTEM.BROKER.CACHE.CLIENTS  6          1            736 B       SynchronousReplica  MyCatalogServer_C-1
SYSTEM.BROKER.CACHE.SERVERS  6          1            760 B       SynchronousReplica  MyCatalogServer_C-1
testmap                      1          1            472 B       SynchronousReplica  MyCatalogServer_C-1
Server total: 4 (2 KB)
*** Listing maps for MyCatalogServer2 ***
Map Name                     Partition  Map Entries  Used Bytes  Shard Type          Container
--------                     ---------  -----------  ----------  ----------          ---------
SYSTEM.BROKER.CACHE.CLIENTS  0          1            744 B       SynchronousReplica  MyCatalogServer2_C-1
SYSTEM.BROKER.CACHE.CLIENTS  6          1            736 B       Primary             MyCatalogServer2_C-1
SYSTEM.BROKER.CACHE.SERVERS  6          1            760 B       Primary             MyCatalogServer2_C-1
testmap                      1          1            472 B       Primary             MyCatalogServer2_C-1
Server total: 4 (2 KB)
Total catalog service domain count: 8 (5 KB)
(The used bytes statistics are accurate only when you are using simple objects or the COPY_TO_BYTES copy mode.)
CWXSI0040I: The showMapSizes command completed successfully.
Ending at: 2021-09-03 02:32:57.487
'
BIP8071I: Successful command completion.
This verifies that the hybrid topology for global cache has been set up correctly.
#UseCase #Hybrid-Integration #AppConnectEnterprise(ACE) #Global-Cache #IntegrationBus(IIB) #Kubernetes #IBMCloudPakforIntegration(ICP4I)