Introduction
IBM Sterling Order Management is an omni-channel solution handling orders, inventory, reverse logistics, delivery management, and overall supply chain collaboration. The solution is available in a Certified Container edition delivered in a continuous delivery (CI/CD) model with pre-defined deployment patterns. The OMS containers are validated with the state-of-the-art IBM Kubernetes certification for security compliance and consistency with emerging standards for cloud deployment. These containers can be deployed on any cloud, public or private, and are compatible with industry-leading tools such as the Red Hat OpenShift Container Platform.
Refer to the product documentation for certified container support.
IBM GSI Labs worked with Infosys to onboard a customer on the latest 10.0 version of Order Management on Microsoft’s Azure cloud platform.
This blog is co-authored by Haritha Thirumuru from Infosys.
The best practices and guidance derived from that deployment of Sterling Order Management on Azure OpenShift are captured in this comprehensive tutorial.
Architecture Overview
IBM Sterling OMS containers are delivered as three images (om-base, om-app, and om-agent) through the IBM Entitled Registry, using licensed API keys that give customers easier pull access to their local registries or CI/CD pipelines. The deployment charts are readily available in the Red Hat OpenShift Helm Catalog.
- om-app — Order Management application server image that handles synchronous traffic patterns, embedded with the IBM WebSphere® Liberty application server
- om-agent — Order Management workflow agent and integration server image that handles asynchronous traffic patterns
- om-base — Base image provisioned in the IBM Cloud Container Registry and enabled for adding product extensions/customizations to create a customized image
The below diagram depicts a high-level architecture used for deploying in Azure OpenShift:

Below are some of the considerations to keep in mind for a production-ready deployment in Azure OpenShift:
- It is recommended that IBM DB2 and IBM MQ be deployed outside the OpenShift cluster, on Azure Virtual Machines, for better data storage patterns and a higher IOPS performance profile. This design also adds efficiencies in portability between on-premises, private cloud, and public cloud footprints.
- An NFS share is used for the pods' persistent volume storage; Azure NetApp Files is used as the network file storage (NFS).
- Azure NetApp Files is an Azure component that needs to be procured separately to create an NFS share. It supports the NFS 4.1 protocol, which is recommended for IBM MQ.
- Custom images are deployed using the built-in IBM OMS Helm charts from the OpenShift Helm catalog.
- Customized application, agent, and integration servers are deployed as pods in the OpenShift cluster. Clients access these pods through OpenShift routes.
Prerequisites
- Create an Azure Red Hat OpenShift (ARO) cluster.
- Procure Azure NetApp Files and create an NFS mount.
- Install DB2 on a VM server outside of the OpenShift cluster.
- Install MQ on a VM server outside of the OpenShift cluster.
- Install Docker on the VM build server.
- Copy the Helm binary to the build server.
- Install the Helm charts provided by IBM from the OpenShift console. For further information, please refer here.
- Download the latest IBM OMS images from the image repository with the IBM entitlement key, using these instructions.
- Create an image-pull secret. This is required to connect to the Azure Container Registry and pull the images as part of the Helm deployment.

Provide the secret name, image registry URL, user ID, and password.
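If you prefer the CLI to the console, the same pull secret can be created with oc; the registry URL and credentials below are placeholders:
oc create secret docker-registry <secret-name> \
  --docker-server=<registry-name>.azurecr.io \
  --docker-username=<user-id> \
  --docker-password=<password> \
  -n <namespace>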
Execute the below command to link the secret with the default service account:
oc secrets link default <secret-name> --for=pull
- Set up the oc command utility on the build server.
- Once the ARO cluster setup is complete, log in to the ARO console.
- Download the oc client for Windows/Linux by clicking the respective link, as shown in the screenshot below.

- Unzip the archive; you should then see the oc client executable.
Estimated time
Estimated execution time: 4-6 hours
Steps
High-level flow

ARO Cluster Setup
- If you have multiple Azure subscriptions, specify the relevant subscription ID:
az account set --subscription <subscription-id>
- Register the Microsoft.RedHatOpenShift resource provider:
az provider register -n Microsoft.RedHatOpenShift --wait
- Register the Microsoft.Compute resource provider:
az provider register -n Microsoft.Compute --wait
- Register the Microsoft.Storage resource provider:
az provider register -n Microsoft.Storage --wait
- Create a resource group for ARO.
- Create a VNET for ARO.
- Add an empty subnet each for the master nodes and the worker nodes (see the sketch below).
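As an illustrative sketch of these three steps (names, region, and address prefixes are placeholders to adjust to your environment):
az group create --name <resource group name> --location <region>
az network vnet create --resource-group <resource group name> \
  --name <vnet name> --address-prefixes 10.0.0.0/22
az network vnet subnet create --resource-group <resource group name> \
  --vnet-name <vnet name> --name <master subnet name> --address-prefixes 10.0.0.0/23
az network vnet subnet create --resource-group <resource group name> \
  --vnet-name <vnet name> --name <worker subnet name> --address-prefixes 10.0.2.0/23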
- Disable subnet private endpoint policies on the master subnet. This is required to be able to connect to and manage the cluster.
az network vnet subnet update \
--name <master subnet name> \
--resource-group <resource group name> \
--vnet-name <vnet name> \
--disable-private-link-service-network-policies true
- Create the ARO cluster (replace the resource group name, VNET name, master subnet name, worker subnet name, and domain name):
az aro create \
--resource-group <resource group name> \
--name <aro-cluster name> \
--vnet <vnet name> \
--master-subnet <master subnet name> \
--worker-subnet <worker subnet name> \
--pull-secret @pull-secret.txt \
--domain <domain name for prod> \
--master-vm-size Standard_D8s_v3 \
--worker-vm-size Standard_D16s_v3 \
--worker-count 5 \
--worker-vm-disk-size-gb 1024 \
--apiserver-visibility Private \
--ingress-visibility Private
Build Process
- Download the om-base image from the IBM Cloud Container Registry using the entitlement key.
- Run a container from the base image.
- Update the sandbox.cfg file within the container with the database host, port, database name, and schema.
- Run the setupfiles command.
- Copy the customizations and create a custom JAR for the Java code.
- Build the resources JAR and entities JAR.
- Generate the new images.
- Load the images and create tags for the new images.
- Push the custom images to Azure Container Registry.
docker run -e LICENSE=accept --privileged -v <shared file system directory path>:/opt/ssfs/shared -it --name <container name> <image>
docker exec -it <containerid> bash
Update the sandbox.cfg file under /opt/ssfs/runtime/properties.
Execute ./setupfiles.sh
Copy the required custom XSLs and XMLs.
Build the resource JAR:
./deployer.sh -t resourcejar
Copy all the required Java classes.
./install3rdParty.sh <classes> 1 -j /opt/ssfs/shared/<classes.jar> -targetJVM EVERY
Copy the Extensions.xml file.
Build the entities JAR:
./deployer.sh -t entitydeployer
Generate the app and agent images:
cd /opt/ssfs/runtime/container-scripts/imagebuild
./generateImages.sh --MODE=app,agent --DEV_MODE=true
Load, tag, and push the images to the registry:
docker load -i om-app_10.0.tar.gz
docker load -i om-agent_10.0.tar.gz
docker tag <imageid> <registryname>:<tagname>
docker push <registryname>:<tagname>
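When the target is Azure Container Registry, authenticate first and use fully qualified image names; the registry and repository names below are placeholders:
az acr login --name <acr-name>
docker tag <imageid> <acr-name>.azurecr.io/om-app:<tagname>
docker push <acr-name>.azurecr.io/om-app:<tagname>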
Deployment Process
- From the OpenShift console, create a project (or use the CLI, as shown below).
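The same project can also be created from the CLI; the project name is a placeholder:
oc new-project <project-name>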

- Manage security context constraints by granting access for users/groups to the service account, for example:
oc adm policy add-scc-to-user anyuid system:serviceaccount:<namespace>:default
- Create a global secret for the data source connectivity details, as mentioned in the Readme; a CLI sketch follows.
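The Readme defines the exact secret name and keys the chart expects; as a minimal sketch with illustrative key names only:
oc create secret generic <app-secret-name> \
  --from-literal=consoleadminpassword=<password> \
  --from-literal=consolenonadminpassword=<password> \
  --from-literal=dbpassword=<password> \
  -n <namespace>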
- Create a Role and RoleBinding. The Role and RoleBinding provide role-based access control for the default service account within the namespace; a sketch follows. Refer to the Readme.
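As a minimal CLI sketch (the verbs, resources, and names here are placeholders; follow the Readme for the exact rules):
oc create role <role-name> --verb=get,list,watch --resource=pods -n <namespace>
oc create rolebinding <binding-name> --role=<role-name> \
  --serviceaccount=<namespace>:default -n <namespace>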
- Create the PV and PVC.
A PV (PersistentVolume) is the storage that has been provisioned for the application; a PVC (PersistentVolumeClaim) is the claim for that provisioned storage.
For this implementation, the NFS file storage is used to create the PV and PVC.
Below are the sample YAML manifests used to create the PV and PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: oms-qa-pv
spec:
  capacity:
    storage: 10Gi
  nfs:
    server: <IP address>
    path: <Path to NFS>
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: oms-qa-ibm-oms-pro-prod-oms-common
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: oms-qa-pv
  storageClassName: ''
  volumeMode: Filesystem
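Assuming the two manifests above are saved as oms-pv.yaml and oms-pvc.yaml (illustrative file names), apply them and confirm that the claim binds:
oc apply -f oms-pv.yaml
oc apply -f oms-pvc.yaml
oc get pv,pvc -n <namespace>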
- Create the Azure Container Registry secret, as mentioned in the prerequisites section.
- Edit the values.yaml file with the appsecret, DB properties, customer overrides properties, agent and app tags, and image registry properties. (Sample values.yaml here)
- Helm install – use this command to deploy the pods to the cluster:
helm install <release-name> <chart> -f <path to values.yaml> -n <namespace> --debug
- Helm upgrade – use this command to update the pods with new changes:
helm upgrade <release-name> <chart> -f <path to values.yaml> -n <namespace>
Note: Set datasetup.loadFactoryData to install the first time, so that the datasetup job runs. Once helm install has executed and the datasetup pod is complete, set it to donotinstall or blank so that the datasetup job isn't invoked again.
Set datasetup.fixPack.loadFPFactoryData to install and datasetup.fixPack.installedFPNo to 0 for the initial installation only.
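After the install or upgrade, a quick way to verify the rollout (release and namespace are placeholders):
helm status <release-name> -n <namespace>
oc get pods -n <namespace>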
Post-deployment activities
Single Sign-on:
Single sign-on is implemented using Azure AD and SAML tokens. An ACS (Assertion Consumer Service) URL, also referred to as the Reply URL, is configured in the IdP; this is the URL where the application expects to receive the SAML token, and it is usually the OMS home page.
Implement SSO in Sterling OMS by setting the appropriate properties in the customer overrides section of values.yaml and implementing the single sign-on class, which converts the SAMLResponse to XML. Ensure that the login ID returned in the SAML response is already configured as a user ID in Sterling OMS, so that the user can be authenticated.
CI/CD Pipeline:
CI/CD DevOps is implemented by creating Jenkins jobs for CDT import/export and for image build/deploy.
SSL Certificates:
Below are the steps to follow for any outbound external system integration from OMS:
- Copy the certificate to the build server.
- Execute rsync to copy the certificate from the build server to the appserver pod:
oc rsync <sourcedir> <podname>:<sharedpath>
- Connect to the pod through a terminal session.
- Go to the NFS mount shared path.
- Use the openssl command to convert the certificate from CER to PEM format:
openssl x509 -in <cert>.cer -outform PEM -out <cert>.pem
- Copy the .pem file to the shared path.
- Set permissions on the .pem file and restart the appserver pod, as consolidated in the sketch below.
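Consolidating the pod-side steps into one minimal sketch (pod name and paths are placeholders, and the restart assumes the deployment recreates the pod):
oc rsh <podname>                  # open a shell inside the appserver pod
cd <NFS mount shared path>        # the commands below run inside the pod
openssl x509 -in <cert>.cer -outform PEM -out <cert>.pem
chmod 644 <cert>.pem              # make the certificate readable
exit
oc delete pod <podname>           # the deployment recreates the appserver pod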
External Domain:
An OpenShift route is a way to expose a service by giving it an externally reachable hostname. Routes created with the external domain are used by all inbound interfaces that access the OMS applications. The same route should be created for production and for a DR (disaster recovery) instance, so that in a disaster recovery incident all external systems access the same URL without any change.
The only change would be the DNS switch-over from the PROD IP to the DR IP.
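As a sketch, a route with an external hostname can be created from the CLI (service name, hostname, and namespace are placeholders):
oc create route edge <route-name> --service=<service-name> \
  --hostname=<external hostname> -n <namespace>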

Azure Load Balancer configuration for DB2 clustering
In Azure, the virtual IP (VIP) configuration of an IBM DB2 HA cluster setup will not work, because virtual IPs are not accessible over the network.
An Azure Load Balancer should be created instead, and additional configuration is needed. All traffic from the OMS application to the database flows through the Azure Load Balancer. Ensure that the load balancer (LB) IP is set to be the same as the DB VIP.
https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-ibm-db2-luw
Azure LB works in active-active mode, whereas IBM DB2 is intended to work in active-passive mode; because the database ports are up on both DB nodes, the Azure Load Balancer could redirect traffic to either node.
To ensure that DB2 remains passive on one node, configure a dummy port on both nodes at the OS level, and configure the failover script to bring the port up on the PRIMARY node and down on the STANDBY node. The health probe port on the LB is set to this dummy port, so that the LB routes traffic only to the node on which the port is up, as sketched below.
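As an illustrative sketch of that health-probe configuration (resource names and the dummy port are placeholders; the linked Microsoft guide walks through the full setup):
az network lb probe create --resource-group <rg> --lb-name <lb name> \
  --name db2-hc-probe --protocol tcp --port <dummy port>
az network lb rule create --resource-group <rg> --lb-name <lb name> \
  --name db2-ha-rule --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name <frontend ip name> --backend-pool-name <backend pool name> \
  --probe-name db2-hc-probe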
Summary
This tutorial detailed how to deploy Sterling Order Management as containers on an Azure Red Hat OpenShift cluster. It also specified the special considerations to factor into the design of the deployment model for optimal performance. The tutorial also covered important pre-installation steps that ensure a smooth installation, as well as post-deployment infrastructure tasks that enable authentication, CI/CD DevOps practices, and robust load-balancing configurations.
Many thanks to Viji Bashyam (vbashyam@us.ibm.com) and Sudhir Balebail (sudhir.balebail@us.ibm.com) for their help in reviewing and refining this tutorial.
Next Steps
Refer to the post-deployment tasks and other tasks for developing and deploying custom code in containers.
IBM’s community of partners is leading the charge on crafting best practices and reusable implementation patterns and feeding them into the OMS product, online documentation, and developer community blogs.
Please share your feedback and inputs to improve the efficacy of this tutorial.
#SupplyChain #OrderManagementandFulfillment