Order Management & Fulfillment

Deploying IBM Sterling OMS using IBM Certified Containers on Google Kubernetes Engine (GKE)

By MURTHY Tumkur posted Mon February 01, 2021 05:48 AM



IBM Certified Containers are best suited to run on the Red Hat OpenShift Container Platform, an enterprise-ready Kubernetes container orchestration platform. However, customers may also choose to deploy their IBM Sterling Order Management solution on the Kubernetes platforms offered by other cloud providers such as Google, Azure, or AWS, or on Kubernetes deployed on-premises. To help, we have documented here our experience deploying IBM Sterling OMS using IBM Certified Containers on Google Cloud Platform's Google Kubernetes Engine (GKE).

Cluster Design

Designing a Kubernetes (K8s) cluster to achieve the desired capacity is a critical step. In addition to cost, which varies by deployment, consider the following aspects:

  • Management Overhead
  • Elasticity/High Availability
  • Efficient Resource Usage/Sizing

Management Overhead and Elasticity

When determining whether to proceed with either a small number of large worker nodes or a larger number of small worker nodes, it’s important to understand the advantages and disadvantages of each strategy. Going with larger worker nodes results in less overhead on the control plane (Controller, etc.) and less resource fragmentation. The drawback of this strategy is that it results in low elasticity. If there is a failure in one worker node, the K8s scheduler must reschedule a relatively large number of pods which requires additional time and resources. Going with the smaller worker nodes results in high elasticity and a smaller blast radius but generates higher overhead on the control plane and resource fragmentation.

Efficient Resource Usage and Sizing

To maximize resource utilization, create one cluster for the lower environments like Build, Development, MasterConfig, QA/UAT. Then, create a separate cluster each for Performance testing and Production. For the cluster that houses the lower environments, it is ideal to separate the workloads by creating different namespaces for each environment. When sizing, it is important to consider the following:

  • Number of Agent/Integration servers
  • Number of Application servers (considering the separate deployments of web applications, redundancy)
  • Kubernetes process overheads (kubelet, kubeproxy, daemonsets, etc.)
  • Deployment strategy (Rolling vs. Recreate. Rolling strategy keeps the old pods until the new pods’ deployment is complete.)
  • Dev ops process (CDT export/import, dbverify)
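As a rough illustration of the sizing arithmetic behind the checklist above, the pod counts and resource requests can be turned into a back-of-the-envelope node count. All figures below (pod counts, CPU requests, node size, overhead) are hypothetical; substitute your own:

```shell
# Back-of-the-envelope node-count estimate (all numbers hypothetical).
APP_PODS=4;    APP_CPU=2000   # app-server pods, CPU request in millicores
AGENT_PODS=10; AGENT_CPU=500  # agent/integration-server pods
NODE_CPU=8000                 # allocatable CPU per worker node, in millicores
OVERHEAD=1000                 # kubelet/kube-proxy/daemonset headroom per node

TOTAL=$(( APP_PODS * APP_CPU + AGENT_PODS * AGENT_CPU ))
PER_NODE=$(( NODE_CPU - OVERHEAD ))
NODES=$(( (TOTAL + PER_NODE - 1) / PER_NODE ))  # ceiling division
echo "Need at least $NODES worker nodes"
```

Remember to leave extra headroom if you use the Rolling deployment strategy, since old and new pods briefly run side by side.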

Creating a Cluster

You can work with GKE using the web console, Google Cloud Shell, or the gcloud CLI on your local PC.

Start by creating a Google Cloud project for your OMS deployment. Create a new GKE cluster under the project from the web console. Choose the appropriate machine type and number of nodes based on the desired size of the K8s cluster. Download and set up the gcloud CLI on your local PC. Obtain the service account credentials in JSON form.
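The console steps above can also be scripted with the gcloud CLI. The sketch below composes and prints the command rather than executing it; the cluster name, machine type, node count, and zone are hypothetical placeholders to be replaced with the values from your sizing exercise:

```shell
# Sketch of an equivalent gcloud CLI cluster creation.
# CLUSTER, machine type, node count, and zone are hypothetical placeholders.
CLUSTER=oms-cluster
CMD="gcloud container clusters create $CLUSTER --machine-type e2-standard-8 --num-nodes 3 --zone us-central1-a"
echo "$CMD"   # review the command, then run it once the values match your sizing
```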

Activate the service-account user by executing the following command:

gcloud auth activate-service-account --key-file=<json key file>

Set the default project in your local config by executing the following command:

gcloud config set project <project id>

Execute the following command to list the GKE cluster created from the console:

gcloud container clusters list

Execute the following command to set up the Kubernetes authentication. This command sets up $HOME/.kube/config.

gcloud container clusters get-credentials <cluster id> --zone <zone where cluster is hosted>

Installing gcloud usually also installs the kubectl command. If it is not installed, install the tool using the following command:

gcloud components install kubectl

Execute the following command to verify that kubectl is set up correctly on your local PC and authenticated to the Kubernetes API:

kubectl get all

Create namespaces for the different environments as below. Note: these names are only for illustration; choose names to suit your naming strategy.

kubectl create namespace dev
kubectl create namespace mc
kubectl create namespace qa

Install the helm CLI to install the OMS Helm charts. You can find the instructions to install Helm here.


Ensure that IBM Db2 and IBM MQ are installed on separate Google Compute Engine instances and that the Db2 and mqm services are accessible from the cluster.

Follow the instructions at the IBM Knowledge Center for configuring IBM Db2 for OMS. Ensure that the services are within the same VPC or that the VPCs are interconnected. Create a schema in Db2 to house the OMS application data. Have the hostname, database name, user, password, port, and schema name handy for the OMS installation.

Create an MQ bindings file with some temporary queues and copy it to your local PC. You could use an IBM MQ Docker image to create the MQ bindings file.

Create an instance of Google Cloud Filestore. Make sure that you create the Filestore instance in the same zone and region as the cluster. Note the path and the server IP for the file store. Create a PersistentVolume with the YAML below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: oms-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: <path to nfs>
    server: <IP address>

Certified Container Images

Download the Certified Container images from cp.icr.io. Follow the instructions to download the images to the local Docker registry (through step 6).

Execute the following command if you want to push the OMS certified images into the Google Cloud Image registry. This step sets up the Docker login to the Google Image Registry.

gcloud auth configure-docker

Tag the downloaded images with new tags to be pushed to the Google Image Registry. The region of the registry can be changed based on your geo (US, EU, Asia, etc.).

docker tag om-base:<tag> <geo>.gcr.io/<project id>/om-base:<tag>
docker push <geo>.gcr.io/<project id>/om-base:<tag>

docker tag om-agent:<tag> <geo>.gcr.io/<project id>/om-agent:<tag>
docker push <geo>.gcr.io/<project id>/om-agent:<tag>

docker tag om-app:<tag> <geo>.gcr.io/<project id>/om-app:<tag>
docker push <geo>.gcr.io/<project id>/om-app:<tag>
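The six commands above can be generated in a loop. This sketch prints the commands for review rather than running them; GEO, PROJECT, and TAG are hypothetical placeholders:

```shell
# Print (dry-run) the tag/push commands for all three OMS images.
# GEO, PROJECT, and TAG are hypothetical placeholders - adjust to your registry.
GEO=us; PROJECT=my-oms-project; TAG=10.0.0.20
for img in om-base om-agent om-app; do
  echo "docker tag $img:$TAG $GEO.gcr.io/$PROJECT/$img:$TAG"
  echo "docker push $GEO.gcr.io/$PROJECT/$img:$TAG"
done
```

Once the printed commands look right, drop the `echo` wrappers (or pipe the output to `sh`) to execute them.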

Execute the following commands to verify the images are uploaded:

gcloud container images list-tags <geo>.gcr.io/<project id>/om-base

gcloud container images list-tags <geo>.gcr.io/<project id>/om-agent

gcloud container images list-tags <geo>.gcr.io/<project id>/om-app

Helm Charts
Download the IBM Sterling OMS Helm charts from the IBM chart repository as described here. Choose the appropriate Helm chart, professional or enterprise. Unzip the contents of the Helm chart archive into a folder on your local PC. Copy the values.yaml file into another file. You could keep a separate values.yaml file for each environment (Dev, MasterConfig, QA, Perf, and Production).

Secret objects
Create the oms-secret K8s Secret object that holds the Db2 password and the Liberty console passwords. Store the content below in a file (ex: oms-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: 'oms-secret'
type: Opaque
stringData:
  consoleadminpassword: '<liberty console admin password>'
  consolenonadminpassword: '<liberty console non admin password>'
  dbpassword: '<password for database user>'

Execute the following command to create the secret:

kubectl create -f oms-secret.yaml

MQ bindings - ConfigMap

Create the mq-bindings configmap by executing the following command:

kubectl create configmap mq-bindings --from-file=.bindings=<path to the bindings file on local PC> -n <namespace>

Using the OMS Helm charts to create the K8s PVC may not work on the Google Cloud platform. Instead, move the pvc.yaml file out of the chart's templates folder and create the PVC manually. The name of the PVC that the chart would have created, and expects to be present, has the pattern:
<release-name>-ibm-oms-ent-prod-<persistence.claims.name from values.yaml>.

We can create the PVC with this name for the chart to consume when we install the chart. Save the following in a file (ex: oms-pvc.yaml):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: <release name>-ibm-oms-ent-prod-oms-common
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  volumeName: oms-pv
  resources:
    requests:
      storage: 50Gi

In the above example, we have chosen to use the out-of-the-box value for global.persistence.claims.name, which is oms-common. We also reference the PV we created previously.
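The naming pattern can be sketched as follows; the release name omdev is a hypothetical example:

```shell
# Compose the PVC name the chart expects (release name is hypothetical).
RELEASE=omdev
CLAIM=oms-common          # global.persistence.claims.name from values.yaml
PVC_NAME="${RELEASE}-ibm-oms-ent-prod-${CLAIM}"
echo "$PVC_NAME"          # prints: omdev-ibm-oms-ent-prod-oms-common
```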

The following are the key entries in the values.yaml that need to be changed:

  • Image repository: <geo>.gcr.io/<project id>
  • Database server name: hostname where Db2 is installed
  • Database port: the Db2 server port, e.g. 50000
  • Database name: the Db2 server instance name, e.g. OMDB
  • Database user: the Db2 user name, e.g. db2inst1
  • Database schema: the schema name, in upper case
  • MQ bindings ConfigMap: "mq-bindings", or the name of the ConfigMap defined above for the MQ bindings
  • appserver.image.tag: the latest image tag
  • App server image name: the name of the om-app image, e.g. om-app
  • App server service annotation: cloud.google.com/neg: '{"ingress": true}' (required for Google GKE Ingress)
  • Ingress enablement: set it to false for the initial setup; it can be modified later
  • Ingress annotation: kubernetes.io/ingress.class: "gce" ("gce" for an external Ingress, "gce-internal" for an internal-facing Ingress)
  • Agent image name: the name of the om-agent image, e.g. om-agent
  • Agent image tag: the image tag name
  • Agent/Integration server list: can be empty during the initial deployment, as you will not yet have any agents/Integration servers; it can be modified later
  • datasetup.loadFactoryData: install. For a first-time install this starts the Datasetup job; after the initial deployment, change this entry so that the Datasetup job is not started again.
  • datasetup.fixPack.loadFactoryData: install, if this is a first-time install
  • Installed fix pack number: set it to 0 for the first-time installation

Review the prerequisites mentioned in the OMS Helm chart documentation. Create and bind the Role required to execute the OMS Helm chart installation successfully. Execute the following command to deploy OMS:

helm install <release name> -f <path to values.yaml> <path to charts folder> --timeout 3600 --namespace <namespace dev|mc|qa etc>

Deployment with an empty database will take time. You will notice that the app server and health-monitor pods remain in Pending status. These pods are waiting for the Datasetup job to finish executing the DDL and performing the loadFactoryData tasks. For subsequent Helm upgrades, modify the values.yaml file to set datasetup.loadFactoryData to '' and datasetup.fixPack.loadFactoryData to ''.


Some of the standard operations to design and manage after the initial OMS deployment are:

  • OMS Build new Images with Customizations
  • OMS CDT Export from Master Config
  • OMS CDT Import into target Environments like Dev, QA, Perf, and Production
  • OMS dbVerify to create DataBase DDLs for target environments like Dev, QA, Perf, and Production
  • Creating new MQ bindings file with new Queue definitions

You can use the om-base container with the IBM OMS Certified Containers source2image features to implement these operations. The out-of-the-box features enable om-base to clone the source repository and execute the build scripts from the source code repository. Please contact IBM Expert Labs for more details on how to incorporate these operations into your CI/CD pipelines.


In our experience, following the steps and recommendations outlined above makes deploying IBM Sterling OMS on the Google Cloud platform using GKE straightforward. The Helm charts provided with the IBM Sterling Certified Containers work with minimal changes.