Sterling Managed File Transfer


IBM Sterling B2B Integrator on Red Hat OpenShift Container Platform

By Nikesh Midha posted Wed February 19, 2020 12:31 PM


Introduction to B2BI

IBM Sterling B2B Integrator helps companies integrate complex B2B EDI processes with their partner communities. Organizations get a single, flexible B2B platform that supports most communication protocols, helps secure their B2B network and data, and achieves high-availability operations. The offering enables companies to reduce costs by consolidating EDI and non-EDI any-to-any transmissions on a single B2B platform and helps automate B2B processes across enterprises, while providing governance and visibility over those processes. IBM Sterling B2B Integrator is delivered as an on-premises software offering that is typically deployed by customers as an on-site solution inside their firewall, or deployed onto hosted infrastructure from a provider of their choice.

On 13 December 2019, IBM unveiled the IBM Sterling B2B Integrator Certified Containers and helm charts to be configured and deployed on the Red Hat OpenShift Container Platform.

Introduction to RHOCP

Red Hat OpenShift helps you develop, deploy, and manage container-based applications. It provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. The OpenShift platform, built on Red Hat Enterprise Linux and Kubernetes, provides a more secure and scalable multi-tenant operating system for today’s enterprise-class applications, while delivering integrated application runtimes and libraries. While Kubernetes provides the cluster management and orchestrates containers on multiple hosts, OpenShift Container Platform adds:

  • Source code management, builds, and deployments for developers
  • Managing and promoting images at scale as they flow through your system, with a built-in Source-to-Image mechanism for CI/CD
  • Application management at scale
  • Team and user tracking for organizing a large enterprise deployment with built-in OAuth server supporting a range of identity providers
  • Networking infrastructure that supports the cluster with easy and powerful handling of routes for external traffic flows
  • Support for development workflows such as CI/CD pipeline, Source-to-Image
  • Templating and Catalog support
  • HA setup and operational support for backup and restore operations
  • Out-of-the-box Prometheus monitoring of the cluster, nodes, and individual containers, plus an EFK logging service

IBM Sterling B2B Integrator Certified Container

IBM Sterling B2B Integrator provides Red Hat certified Docker images and IBM certified production-grade helm charts with streamlined deployment instructions for OpenShift Container Platform. This deployment architecture helps install the application with greater ease (single-click install) and flexibility (each B2BI runtime in its own Docker container with an independent ability to scale), along with an option to scale dynamically based on standard CPU/memory usage thresholds, thus avoiding manual intervention. Certified containers are provided for both the Sterling B2B Integrator and Sterling File Gateway product installations.

In this technical paper, you will get a preview of how to configure B2B Integrator on OpenShift, integrate it with a database and a JMS server, and deploy the application runtimes – noapp server, adapter container, and Liberty API servers – as independently auto-scalable containers for production-grade deployments.

What’s new in this version of IBM Sterling B2B Integrator?

This release adds support for a new product installation paradigm in the form of Certified Containers, which can be installed on a container platform to save significantly on operational cost and time. This install option is open both to existing customers willing to upgrade to the current version and to new customers looking to install the product on the latest available technology stack and tools. It delivers the following:

  • Red Hat certified Sterling B2B Integrator container images with base OS as Red Hat Universal Base Image (UBI) 7.7
  • IBM certified Helm charts for enterprise-grade, secure production deployments
  • Automatically scale up and scale down your containers based on your load
  • Self-Healing capabilities enabled with multiple replicas, graceful recovery and health monitoring using liveness and readiness probes
  • Run all of the above on Red Hat OpenShift Container Platform, which offers a consistent hybrid cloud foundation

Red Hat certified Sterling B2B Integrator Docker images offer a multitude of benefits, such as:

  • All components come from a trusted source
  • Platform packages have not been tampered with
  • Container image is free of known vulnerabilities in the platform components or layers
  • Container is compatible across Red Hat platforms and OpenShift, whether on bare metal or a cloud environment
  • The complete stack is commercially supported by Red Hat and our partners

IBM certification of the Sterling B2B Integrator containers and charts highlights the following client values:

  • Enterprise Ready – Production Grade, Resilient, Scalable and Recoverable
  • Secure/Compliant – Image Vulnerabilities management, Limited security privileges and secure access
  • Flexibility – Create once and deploy anywhere OpenShift runs, be it IBM Cloud, AWS, Azure, or Google Cloud
  • Speed – Accelerate time to business by focusing on your business value rather than building an IT stack
  • Reduce Costs – Automate and simplify the deployments

Pre-Requisites

This article assumes that you have reviewed the product documentation on the needed pre-requisites and have installed, at a minimum, the following software:

  1. OpenShift version 3.11 as outlined here
  2. IBM DB2 database version 11.x or above
  3. IBM MQ JMS server version 9.0.0.7
  4. IBM Sterling B2B Integrator certified containers for Red Hat OpenShift. Detailed instructions for downloading Sterling B2B Integrator Certified Container are provided in the product documentation
  5. Installing Helm and Tiller projects in OpenShift

    Red Hat OpenShift Container Platform does not come with a helm-chart-enabled project by default, so IBM provides two options:
    a. Installing IBM Cloud Private Common Services as defined here
    b. Installing the Tiller project directly on OpenShift by referring to this blog from Red Hat

IBM Sterling B2B Integrator Certified Container Chart Details

As part of the Certified Containers release, there are two product offerings, for Sterling B2B Integrator and Sterling File Gateway respectively, named with the appropriate part numbers on Passport Advantage as per the details available here.

Each product offering's Certified Container bundle has two sub-parts:

  1. IBM Certified Helm Charts – ibm-b2bi-prod-1.0.0.tgz or ibm-sfg-prod-1.0.0.tgz – as per the downloaded offering
  2. Red Hat Certified Docker Image – b2bi-603.tar or sfg-603.tar – as per the downloaded offering

Further steps to load the docker image (b2bi:6.0.3 or sfg:6.0.3) are available here.

The downloaded helm chart deploys an IBM Sterling B2B Integrator cluster on the Red Hat OpenShift Container Platform with the following resources:

  1. Deployments/Statefulsets
    a. Application Server Independent (ASI) server with 1 replica by default
    b. Adapter Container (AC) server with 1 replica by default
    c. Liberty API server with 1 replica by default

  2. Services
    a. ASI service - This service is used to access ASI servers using a consistent IP address
    b. AC service - This service is used to access AC servers using a consistent IP address
    c. Liberty API service - This service is used to access API servers using a consistent IP

  3. ConfigMap – To map the application configurations to the application deployment

  4. Persistent Volume Claim(s) – To map external resources/files outside of the application deployment

  5. Service Account, Role and Role Binding – If an existing Service Account is not provided

  6. Horizontal Pod Autoscaler – to scale deployments dynamically for each of the 3 deployments, if enabled

  7. Pod Disruption Budget – if enabled for each of the 3 deployments

  8. Monitoring Dashboard – A sample Grafana monitoring dashboard, if enabled

  9. Ingress – If enabled, an ingress resource is installed to set up external URLs for accessing product service endpoints

  10. Job – A database setup job to set up a new database or upgrade an existing one

Pre-Configurations required for deploying in OpenShift

Before starting to deploy IBM Sterling B2B Integrator containers in OpenShift, you need to set up a few pre-configurations as explained below:

  • Persistent Volume(s) – mountable file drives for referencing external resource files such as the database driver jar, JCE policy file, and trust stores, or for writing files such as log files and documents
  • Secrets – hold sensitive information such as passwords and other confidential credentials
  • Security Constraints – to finely control the permissions/capabilities needed to deploy the charts
  • Configurations – application configurations such as license and security, database connectivity, and JMS connection parameters, along with environment-specific configurations such as logs, service ports, liveness and readiness parameters, PVCs, volume mounts, resources, autoscaling, node affinity, etc., for each application server

Setting up persistent volume(s)

We need to create a persistent volume to provide the environment-specific external resources, such as database drivers, JCE policy files, key stores, and trust stores, needed to establish SSL connections to the database server, MQ server, and so on. We will refer to this volume as the resources volume in this document.

Application logs can either be redirected to the console, which is the recommended option, or written to a file system/storage location outside of the application containers, in which case an additional persistent volume needs to be created for logs. We will refer to this volume as the logs volume in this document.

Similarly, there might be a need to map additional volumes to externalize certain data generated by the application at runtime, for example documents. To accommodate additional persistent volumes, the application helm chart values yaml provides extension points through extraVolumes and extraVolumeMounts (see the configuration section for details) to extend the deployment with additional persistent volume claims and volume mounts matching the additional persistent volumes, as sketched below.
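As an illustration only, an additional documents volume might be wired in per deployment through the extraPVCs and extraVolumeMounts parameters listed in the configuration table. This is a sketch: the claim name, labels, and mount path below are assumptions, and the exact structure expected under these keys is defined by the chart itself.

asi:
  extraPVCs:
    - name: documents                 # hypothetical claim for runtime documents
      storageClassName: "standard"
      selector:
        label: "intent"
        value: "documents"
      accessMode: ReadWriteMany
      size: 1Gi
  extraVolumeMounts:
    - name: documents
      mountPath: /shared/documents    # hypothetical mount path inside the container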

On an OpenShift Container Platform–

  1. Create a PersistentVolume for resources with access mode ReadOnlyMany and storage less than or equal to 100Mi (can vary based on the set of jars/files to be provided externally).
  2. If applicable, create a PersistentVolume for logs with access mode ReadWriteMany and storage less than or equal to 1Gi (can vary based on log file usage and purge intervals).
  3. If applicable, create PersistentVolume(s) for additional data with an access mode and storage limit suitable for the applicable use case(s).

When creating persistent volumes, please make a note of the storage class and metadata labels. These will be required to configure the respective Persistent Volume Claim's storage class and label selector in the helm chart configuration (refer to the configuration section for more details) so that the claims get bound to the persistent volumes based on the match.

A few sample Persistent Volume templates are bundled in the helm charts under ./ibm-b2bi-prod (or ibm-sfg-prod)/ibm_cloud_pak/pak_extensions/pre-install/volume/resources-pv (or logs-pv).yaml. These are for reference only and can be modified as per the selected storage option/location/size and other available platform options. You can use the OpenShift command line or UI to create/update persistent volumes. Once the template has been defined as below, the following command can be used to create a new persistent volume:

Template:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: resources-pv
  labels:
    intent: resources
spec:
  storageClassName: "standard"
  capacity:
    storage: 500Mi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: 9.37.37.47
    path: /mnt/nfs/data/b2bi_resources/

Command:

oc create -f /path/to/pv.yaml

For more information, see Configuring persistent volumes in OpenShift Container Platform.

Important Note – The application containers need to be given appropriate access to the shared storage mapped through persistent volumes using fsGroup or supplemental group IDs, which are configurable through the helm values yaml (refer to the configuration section for more details).
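For example, assuming the resources persistent volume shown in the template above (storage class standard, label intent: resources), the matching claim and storage access settings in the values yaml would look roughly like this sketch; the group IDs must match the ownership of your shared storage:

persistence:
  enabled: true
appResourcesPVC:
  storageClassName: "standard"
  selector:
    label: "intent"
    value: "resources"
  accessMode: ReadOnlyMany
  size: 100Mi
security:
  fsGroup: 1010                # file system group id with access to the volume
  supplementalGroups: 5555     # supplemental group id with access to the volume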

Setting up Secrets

The Secret resource/object provides a mechanism to hold sensitive information such as a database password. You can create Secrets using either the OpenShift command line or the Platform Catalog UI with the Secret kind template.

You can create Secrets for the following password credentials used by the application:

  1. System Passphrase
  2. Database credentials which include - User Name and Password. Additionally if SSL connection is enabled for database - Truststore and Keystore Passwords
  3. JMS MQ server credentials - User Name and Password
  4. Liberty API server, if SSL/HTTPS is enabled - Keystore Password

A sample Secret template is bundled in the helm charts under ./ibm-b2bi-prod (or ibm-sfg-prod)/ibm_cloud_pak/pak_extensions/pre-install/secretapp-secrets.yaml. This is for reference only and can be modified as per the applicable environment credentials. You can use the OpenShift command line or web console UI to create/update Secrets. Once the template has been defined as below, the following command can be used to create a new Secret:

Template:

apiVersion: v1
kind: Secret
metadata:
  name: b2b-system-passphrase-secret
type: Opaque
stringData:
  SYSTEM_PASSPHRASE: password
---
apiVersion: v1
kind: Secret
metadata:
  name: b2b-db-secret
type: Opaque
stringData:
  DB_USER: b2b603db
  DB_PASSWORD: password
  DB_TRUSTSTORE_PASSWORD: password
  DB_KEYSTORE_PASSWORD: password
---
apiVersion: v1
kind: Secret
metadata:
  name: b2b-jms-secret
type: Opaque
stringData:
  JMS_USERNAME: jms
  JMS_PASSWORD: password
  JMS_KEYSTORE_PASSWORD: password
  JMS_TRUSTSTORE_PASSWORD: password
---
apiVersion: v1
kind: Secret
metadata:
  name: b2b-liberty-secret
type: Opaque
stringData:
  LIBERTY_KEYSTORE_PASSWORD: password


Command:

oc create -f /path/to/secrets.yaml

For more information, see Configuring secrets in OpenShift Container Platform.

Please note that while using or referring to the sample application Secret template, you can modify the Secret names as required, but the keys used in the stringData section must be as defined in the sample template. For example, in the case of the Secret for the system passphrase mentioned above, the stringData key must be SYSTEM_PASSPHRASE.

After the Secrets are created, you need to specify the Secret names against the respective configuration fields in the helm chart configuration (refer to the configuration section for more details):

setupCfg:
  systemPassphraseSecret:
  dbSecret:
  jmsSecret:
  libertySecret:
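For example, using the sample Secret names created above (a sketch; substitute the names of the Secrets you actually created):

setupCfg:
  systemPassphraseSecret: b2b-system-passphrase-secret
  dbSecret: b2b-db-secret
  jmsSecret: b2b-jms-secret
  libertySecret: b2b-liberty-secret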

Setting up Security Context for OpenShift Container Platform

Supported predefined SecurityContextConstraints name: `ibm-anyuid-scc`

This chart optionally defines a custom SecurityContextConstraints (on Red Hat OpenShift Container Platform) which is used to finely control the permissions/capabilities needed to deploy this chart. It is based on the predefined SecurityContextConstraint name: `ibm-restricted-scc` with extra required privileges.

- Custom SecurityContextConstraints definition:

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: ibm-b2bi-scc
  labels:
    app: "ibm-b2bi-scc"
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
privileged: false
allowPrivilegeEscalation: false
allowedCapabilities:
allowedFlexVolumes: []
allowedUnsafeSysctls: []
defaultAddCapabilities: []
defaultAllowPrivilegeEscalation: false
forbiddenSysctls:
  - "*"
fsGroup:
  type: MustRunAs
  ranges:
  - min: 1
    max: 4294967294
readOnlyRootFilesystem: false
requiredDropCapabilities:
- MKNOD
- AUDIT_WRITE
- KILL
- NET_BIND_SERVICE
- NET_RAW
- FOWNER
- FSETID
- SYS_CHROOT
- SETFCAP
- SETPCAP
- CHOWN
- SETGID
- SETUID
- DAC_OVERRIDE
runAsUser:
  type: MustRunAsNonRoot
# This can be customized for your host machine
seLinuxContext:
  type: RunAsAny
  # seLinuxOptions:
  #   level:
  #   user:
  #   role:
  #   type:
supplementalGroups:
  type: MustRunAs
  ranges:
  - min: 1
    max: 4294967294
# This can be customized for your host machine
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
- nfs
priority: 0

- Custom ClusterRole for the custom SecurityContextConstraints:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: "ibm-b2bi-scc"
  labels:
    app: "ibm-b2bi-scc"
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - "ibm-b2bi-scc"
  resources:
  - securitycontextconstraints
  verbs:
  - use

From the command line, you can run the setup scripts included under ./ibm-b2bi-prod (or ibm-sfg-prod)/ibm_cloud_pak/pak_extensions/

As a cluster admin, run the pre-install script located at:

- pre-install/clusterAdministration/createSecurityClusterPrereqs.sh

As a team admin, run the namespace-scoped pre-install script located at:

- pre-install/namespaceAdministration/createSecurityNamespacePrereqs.sh
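As an illustration of the namespace-scoped prerequisite, a RoleBinding can grant a service account in the target namespace use of the custom SCC through the ClusterRole above. The following is only a sketch: the binding name is an assumption, and you should substitute your own namespace and service account (or simply run the bundled script).

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ibm-b2bi-scc-rb           # hypothetical binding name
  namespace: <namespace>
subjects:
- kind: ServiceAccount
  name: default                   # or the custom service account used by the release
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ibm-b2bi-scc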

Setting up Helm Chart Configurations

Helm charts come bundled with a default configuration yaml – values.yaml – available under ./ibm-b2bi-prod (or ibm-sfg-prod)/values.yaml.

When installing the helm charts from the command line, you can update the configurations in this file, or, as a suggested option, define a new values yaml and copy into it only the configuration sections that need to be overridden for the deployment environment.

The new values yaml can be provided to helm install using the --values or -f command options.
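For instance, a minimal override values yaml for a DB2 and IBM MQ environment might look like the following sketch. All host, port, schema, and driver values are placeholders; the parameter names come from the table below, and the Secret names assume the sample Secrets created earlier.

global:
  image:
    repository: <registry path of the loaded b2bi image>
    tag: 6.0.3
setupCfg:
  systemPassphraseSecret: b2b-system-passphrase-secret
  dbVendor: DB2
  dbHost: <database host>
  dbPort: <database port>
  dbUser: <database user>
  dbData: <database schema name>
  dbDrivers: <database driver jar name>
  dbSecret: b2b-db-secret
  jmsHost: <MQ server host>
  jmsPort: <MQ server port>
  jmsQueueName: <queue name>
  jmsSecret: b2b-jms-secret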

The complete list of configurations with a brief description and default value is below:

| Parameter | Description | Default |
| --- | --- | --- |
| global.image.repository | Repository for B2B docker images | |
| global.image.tag | Docker image tag | 6.0.3 |
| global.image.pullPolicy | Pull policy for repository | IfNotPresent |
| global.image.pullSecret | Pull secret for repository access | |
| arch | Compatible platform architecture | x86_64 |
| serviceAccount.create | Create custom defined service account | false |
| serviceAccount.name | Existing service account name | default |
| persistence.enabled | Enable storage access to persistent volumes | true |
| persistence.useDynamicProvisioning | Enable dynamic provisioning of persistent volumes | false |
| appResourcesPVC.name | Application resources persistent volume claim name | resources |
| appResourcesPVC.storageClassName | Resources persistent volume storage class name | |
| appResourcesPVC.selector.label | Resources persistent volume selector label | intent |
| appResourcesPVC.selector.value | Resources persistent volume selector value | resources |
| appResourcesPVC.accessMode | Resources persistent volume access mode | ReadOnlyMany |
| appResourcesPVC.size | Resources persistent volume storage size | 100 Mi |
| appLogsPVC.name | Application logs persistent volume claim name | logs |
| appLogsPVC.storageClassName | Logs persistent volume storage class name | |
| appLogsPVC.selector.label | Logs persistent volume selector label | intent |
| appLogsPVC.selector.value | Logs persistent volume selector value | logs |
| appLogsPVC.accessMode | Logs persistent volume access mode | ReadWriteMany |
| appLogsPVC.size | Logs persistent volume storage size | 500 Mi |
| security.supplementalGroups | Supplemental group id to access the persistent volume | 5555 |
| security.fsGroup | File system group id to access the persistent volume | 1010 |
| security.runAsUser | User ID that all containers run as | 1010 |
| ingress.enabled | Enable ingress resource | false |
| ingress.controller | Ingress controller class | nginx |
| ingress.annotations | Additional annotations for the ingress resource | |
| ingress.hosts.extraHosts | Extra hosts for ingress resource | |
| dataSetup.enabled | Enable database setup job execution | true |
| dataSetup.upgrade | Upgrade an older release | false |
| env.tz | Timezone for application runtime | UTC |
| env.license | View or accept license | accept |
| env.upgradeCompatibilityVerified | Indicate release upgrade compatibility verification done | false |
| env.apiHost | External host for API Liberty server, if configured | |
| env.apiPort | External port for API Liberty server, if configured | |
| env.apiSslPort | External SSL port for API Liberty server, if configured | |
| logs.enableAppLogOnConsole | Enable application logs redirection to pod console | true |
| setupCfg.basePort | Base/initial port for the application | 50000 |
| setupCfg.licenseAcceptEnableSfg | Consent for accepting license for Sterling File Gateway module | false |
| setupCfg.licenseAcceptEnableEbics | Consent for accepting license for EBICS module | false |
| setupCfg.licenseAcceptEnableFinancialServices | Consent for accepting license for EBICS client module | false |
| setupCfg.systemPassphraseSecret | System passphrase secret name | |
| setupCfg.enableFipsMode | Enable FIPS mode | false |
| setupCfg.nistComplianceMode | NIST 800-131a compliance mode | off |
| setupCfg.dbVendor | Database vendor - DB2/Oracle/MSSQL | |
| setupCfg.dbHost | Database host | |
| setupCfg.dbPort | Database port | |
| setupCfg.dbUser | Database user | |
| setupCfg.dbData | Database schema name | |
| setupCfg.dbDrivers | Database driver jar name | |
| setupCfg.dbCreateSchema | Create/update database schema on install/upgrade | true |
| setupCfg.oracleUseServiceName | Use service name; applicable if db vendor is Oracle | false |
| setupCfg.usessl | Enable SSL for database connection | false |
| setupCfg.dbTruststore | Database SSL connection truststore file name | |
| setupCfg.dbKeystore | Database SSL connection keystore file name | |
| setupCfg.dbSecret | Database user secret name | |
| setupCfg.adminEmailAddress | Administrator email address | |
| setupCfg.smtpHost | SMTP email server host | |
| setupCfg.softStopTimeout | Timeout for soft stop | |
| setupCfg.jmsVendor | JMS MQ vendor | |
| setupCfg.jmsConnectionFactory | MQ connection factory class name | |
| setupCfg.jmsConnectionFactoryInstantiator | MQ connection factory creator class name | |
| setupCfg.jmsQueueName | Queue name | |
| setupCfg.jmsHost | MQ server host | |
| setupCfg.jmsPort | MQ server port | |
| setupCfg.jmsUser | MQ user name | |
| setupCfg.jmsConnectionNameList | MQ connection name list | |
| setupCfg.jmsChannel | MQ channel name | |
| setupCfg.jmsEnableSsl | Enable SSL for MQ server connection | |
| setupCfg.jmsKeystorePath | MQ SSL connection keystore path | |
| setupCfg.jmsTruststorePath | MQ SSL connection truststore path | |
| setupCfg.jmsCiphersuite | MQ SSL connection ciphersuite | |
| setupCfg.jmsProtocol | MQ SSL connection protocol | TLSv1.2 |
| setupCfg.jmsSecret | MQ user secret name | |
| setupCfg.libertyKeystoreLocation | Liberty API server keystore location | |
| setupCfg.libertyProtocol | Liberty API server SSL connection protocol | TLSv1.2 |
| setupCfg.libertySecret | Liberty API server SSL connection secret name | |
| setupCfg.libertyJvmOptions | Liberty API server JVM options | |
| setupCfg.updateJcePolicyFile | Enable JCE policy file update | false |
| setupCfg.jcePolicyFile | JCE policy file name | |
| asi.replicaCount | Application Server Independent (ASI) deployment replica count | 1 |
| asi.extraPorts | Extra container ports for pods | |
| asi.service.type | Service type | NodePort |
| asi.service.ports.http.name | Service http port name | http |
| asi.service.ports.http.port | Service http port number | 35000 |
| asi.service.ports.http.targetPort | Service target port number or name on pod | http |
| asi.service.ports.http.nodePort | Service node port | 30000 |
| asi.service.ports.http.protocol | Service port connection protocol | TCP |
| asi.service.extraPorts | Extra ports for service | |
| asi.livenessProbe.initialDelaySeconds | Liveness probe initial delay in seconds | 60 |
| asi.livenessProbe.timeoutSeconds | Liveness probe timeout in seconds | 30 |
| asi.livenessProbe.periodSeconds | Liveness probe interval in seconds | 60 |
| asi.readinessProbe.initialDelaySeconds | Readiness probe initial delay in seconds | 120 |
| asi.readinessProbe.timeoutSeconds | Readiness probe timeout in seconds | 5 |
| asi.readinessProbe.periodSeconds | Readiness probe interval in seconds | 60 |
| asi.ingress.hosts.name | Host name for ingress resource | asi-server.local |
| asi.ingress.hosts.extraPaths | Extra paths for ingress resource | |
| asi.ingress.tls.enabled | Enable TLS for ingress | |
| asi.ingress.tls.secretName | TLS secret name | |
| asi.extraPVCs | Extra volume claims | |
| asi.extraVolumeMounts | Extra volume mounts | |
| asi.extraInitContainers | Extra init containers | |
| asi.resources | CPU/Memory resource requests/limits | |
| asi.autoscaling.enabled | Enable autoscaling | false |
| asi.autoscaling.minReplicas | Minimum replicas for autoscaling | 1 |
| asi.autoscaling.maxReplicas | Maximum replicas for autoscaling | 2 |
| asi.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization | 60 |
| asi.defaultPodDisruptionBudget.enabled | Enable default pod disruption budget | false |
| asi.defaultPodDisruptionBudget.minAvailable | Minimum available for pod disruption budget | 1 |
| asi.extraLabels | Extra labels | |
| asi.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| asi.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| asi.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| asi.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| asi.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| asi.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| ac.replicaCount | Adapter Container (AC) server deployment replica count | 1 |
| ac.extraPorts | Extra container ports for pods | |
| ac.service.type | Service type | NodePort |
| ac.service.ports.http.name | Service http port name | http |
| ac.service.ports.http.port | Service http port number | 35001 |
| ac.service.ports.http.targetPort | Service target port number or name on pod | http |
| ac.service.ports.http.nodePort | Service node port | 30001 |
| ac.service.ports.http.protocol | Service port connection protocol | TCP |
| ac.service.extraPorts | Extra ports for service | |
| ac.livenessProbe.initialDelaySeconds | Liveness probe initial delay in seconds | 60 |
| ac.livenessProbe.timeoutSeconds | Liveness probe timeout in seconds | 5 |
| ac.livenessProbe.periodSeconds | Liveness probe interval in seconds | 60 |
| ac.readinessProbe.initialDelaySeconds | Readiness probe initial delay in seconds | 120 |
| ac.readinessProbe.timeoutSeconds | Readiness probe timeout in seconds | 5 |
| ac.readinessProbe.periodSeconds | Readiness probe interval in seconds | 60 |
| ac.ingress.hosts.name | Host name for ingress resource | ac-server.local |
| ac.ingress.hosts.extraPaths | Extra paths for ingress resource | |
| ac.ingress.tls.enabled | Enable TLS for ingress | |
| ac.ingress.tls.secretName | TLS secret name | |
| ac.extraPVCs | Extra volume claims | |
| ac.extraVolumeMounts | Extra volume mounts | |
| ac.extraInitContainers | Extra init containers | |
| ac.resources | CPU/Memory resource requests/limits | |
| ac.autoscaling.enabled | Enable autoscaling | false |
| ac.autoscaling.minReplicas | Minimum replicas for autoscaling | 1 |
| ac.autoscaling.maxReplicas | Maximum replicas for autoscaling | 2 |
| ac.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization | 60 |
| ac.defaultPodDisruptionBudget.enabled | Enable default pod disruption budget | false |
| ac.defaultPodDisruptionBudget.minAvailable | Minimum available for pod disruption budget | 1 |
| ac.extraLabels | Extra labels | |
| ac.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| ac.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| ac.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| ac.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| ac.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| ac.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| api.replicaCount | Liberty API server (API) deployment replica count | 1 |
| api.extraPorts | Extra container ports for pods | |
| api.service.type | Service type | NodePort |
| api.service.ports.http.name | Service http port name | http |
| api.service.ports.http.port | Service http port number | 35002 |
| api.service.ports.http.targetPort | Service target port number or name on pod | http |
| api.service.ports.http.nodePort | Service node port | 30002 |
| api.service.ports.http.protocol | Service port connection protocol | TCP |
| api.service.ports.https.name | Service https port name | https |
| api.service.ports.https.port | Service https port number | 35003 |
| api.service.ports.https.targetPort | Service target port number or name on pod | https |
| api.service.ports.https.nodePort | Service node port | 30003 |
| api.service.ports.https.protocol | Service port connection protocol | TCP |
| api.service.extraPorts | Extra ports for service | |
| api.livenessProbe.initialDelaySeconds | Liveness probe initial delay in seconds | 120 |
| api.livenessProbe.timeoutSeconds | Liveness probe timeout in seconds | 5 |
| api.livenessProbe.periodSeconds | Liveness probe interval in seconds | 60 |
| api.readinessProbe.initialDelaySeconds | Readiness probe initial delay in seconds | 120 |
| api.readinessProbe.timeoutSeconds | Readiness probe timeout in seconds | 5 |
| api.readinessProbe.periodSeconds | Readiness probe interval in seconds | 60 |
| api.ingress.hosts.name | Host name for ingress resource | api-server.local |
| api.ingress.hosts.extraPaths | Extra paths for ingress resource | |
| api.ingress.tls.enabled | Enable TLS for ingress | |
| api.ingress.tls.secretName | TLS secret name | |
| api.extraPVCs | Extra volume claims | |
| api.extraVolumeMounts | Extra volume mounts | |
| api.extraInitContainers | Extra init containers | |
| api.resources | CPU/Memory resource requests/limits | |
| api.autoscaling.enabled | Enable autoscaling | false |
| api.autoscaling.minReplicas | Minimum replicas for autoscaling | 1 |
| api.autoscaling.maxReplicas | Maximum replicas for autoscaling | 2 |
| api.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization | 60 |
| api.defaultPodDisruptionBudget.enabled | Enable default pod disruption budget | false |
| api.defaultPodDisruptionBudget.minAvailable | Minimum available for pod disruption budget | 1 |
| api.extraLabels | Extra labels | |
| api.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| api.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| api.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| api.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| api.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| api.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to section "Affinity". | |
| nameOverride | Chart resource short name override | |
| fullnameOverride | Chart resource full name override | |
| dashboard.enabled | Enable sample Grafana dashboard | false |
| test.image.repository | Repository for docker image used for helm test and cleanup | ibmcom |
| test.image.name | helm test and cleanup docker image name | opencontent-common-utils |
| test.image.tag | helm test and cleanup docker image tag | 1.1.4 |
| test.image.pullPolicy | Pull policy for helm test image repository | IfNotPresent |

Pre-installation Database Checks before Fresh Install or Upgrade

The following pre-installation database checks need to be performed before proceeding with the installation:

  1. When installing the chart against a new database that does not have the IBM Sterling B2B Integrator schema tables and metadata, ensure that the `dataSetup.enabled` parameter is set to `true` and the `dataSetup.upgrade` parameter is set to `false`. This will create the required database tables and metadata in the database before installing the chart.

  2. When installing the chart against a database from an older release version, ensure that the `dataSetup.enabled` parameter is set to `true` and the `dataSetup.upgrade` parameter is set to `true`. This will upgrade the given database tables and metadata to the latest version.

  3. When installing the chart against a database that already has the Sterling B2B Integrator tables and factory metadata, ensure that the `dataSetup.enabled` parameter is set to `false`. This will avoid re-creating tables and overwriting factory data. The corresponding values yaml fragments are sketched after this list.
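The values yaml fragments for the three scenarios above would look roughly as follows (a sketch using the parameter names from the configuration table):

# 1. Fresh install against a new database
dataSetup:
  enabled: true
  upgrade: false

# 2. Upgrade of a database created by an older release
dataSetup:
  enabled: true
  upgrade: true

# 3. Database already loaded with the current release schema and factory data
dataSetup:
  enabled: false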

Resources Required for Application Pods/Containers

The CPU and memory resource allocation and limit for each pod can be defined in the values yaml. The following table captures the default/minimum usage and limits per pod which can be scaled up based on the expected application load and resource availability:

| Pod | Memory Requested | Memory Limit | CPU Requested | CPU Limit |
| --- | --- | --- | --- | --- |
| Application Server Independent (ASI) pod | 4 Gi | 8 Gi | 2 | 4 |
| Adapter Container (AC) pod | 4 Gi | 8 Gi | 2 | 4 |
| Liberty API server (API) pod | 4 Gi | 8 Gi | 2 | 4 |
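These defaults map to the per-deployment resources blocks in the values yaml. For example, the ASI figures above correspond roughly to the following sketch (the ac and api sections take the same shape):

asi:
  resources:
    requests:
      cpu: 2
      memory: 4Gi
    limits:
      cpu: 4
      memory: 8Gi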

Installing Application using Helm

To install the application chart using helm with the release name of your choice, say my-release, ensure the chart is available locally and run the following command:

$ helm install --name my-release -f ./values.yaml ./ibm-b2bi-prod --timeout 3600 --tls --namespace <namespace>

Install options –

  1. --name – Release name. If unspecified, it will autogenerate one for you
  2. -f/--values – Configuration values yaml file
  3. --timeout - Time in seconds to wait for any individual Kubernetes operation during installation (such as the database setup job). If unspecified, it defaults to 300. The recommended value is 3600.
  4. --tls – Enables TLS for helm
  5. --namespace - Namespace to install the release into. Defaults to the current kube config namespace.

For more details on these and other install options, please refer to the helm documentation.

Please Note - Depending on the container platform infrastructure and database network connectivity, the chart deployment takes on average:

  • 2-3 minutes for installation against a pre-loaded database.
  • 20-30 minutes for installation against a new database, or for an older-release upgrade, with dataSetup.enabled set to true.

After installation, you can check for the ‘Ready’ status of the application server pods using the command

oc get pods

Once the pods are ready, you can access the application as described below:

  1. If ingress.enabled is set to false and <asi/ac/api>.service.type is set to NodePort, the application URL can be accessed using any cluster public node IP and <asi/ac/api>.service.(http/https).nodePort (refer to the configuration section for more details on these configurations). A sample URL construct is shown below:

Dashboard –

http://<ANY CLUSTER PUBLIC NODEIP>:<asi.service.http.nodePort>/dashboard

APIs –

http://<ANY CLUSTER PUBLIC NODEIP>:<api.service.http.nodePort>/B2BAPIs/svc

  2. If ingress.enabled is set to true, the application URL can be accessed using the ingress hosts defined for the asi/ac/api ingress resources in the configurations. A sample URL construct is shown below:

Dashboard –

<http or https>://<asi.ingress.hosts.name>/dashboard

APIs –

<http or https>://<api.ingress.hosts.name>/B2BAPIs/svc

For automatic mapping details between Kubernetes ingress host and OpenShift Routes, please refer this.

Additional routes can be defined for custom application URL endpoints using OpenShift Routes. More details can be found here.

Upgrading the Application Release using Helm

You would want to upgrade your deployment release, say my-release, when you have a new docker image or helm chart version, or a change in configuration, for example, new service ports to be exposed.

  1. Ensure that the chart is downloaded locally and available.

  2. Run the following command to upgrade your deployment.

helm upgrade my-release -f values.yaml ./ibm-b2bi-prod --timeout 3600 --tls

Deleting the Application Release using Helm

To uninstall/delete the `my-release` deployment run the command:

helm delete my-release --purge --tls

Since certain Kubernetes resources are created using the `pre-install` hook, the helm delete command will try to delete them as a post-delete activity. If it fails to do so, you need to manually delete the following resources created by the chart:

  1. ConfigMap - <release name>-b2bi-config

oc delete configmap <release name>-b2bi-config

  2. PersistentVolumeClaim, if persistence is enabled - <release name>-b2bi-resources-pvc

oc delete pvc <release name>-b2bi-resources-pvc

  3. PersistentVolumeClaim, if persistence is enabled and enableAppLogOnConsole is set to false - <release name>-b2bi-logs-pvc

oc delete pvc <release name>-b2bi-logs-pvc

Note: You may also consider backing up the data created in the writable persistent volume storage locations and then deleting or updating it before any reuse, if required. The same evaluation can be done for Secrets if they are no longer in use.

Configuring Advanced Scheduling and Affinity

The chart provides several mechanisms, in the form of node affinity, pod affinity, and pod anti-affinity, to configure advanced pod scheduling in OpenShift. Refer to the OpenShift documentation for details on advanced scheduling, pod affinity/anti-affinity, and node affinity. The following configurations are available in the helm chart values yaml to set up node and pod affinity/anti-affinity:

  1. Node affinity - This can be configured using the following parameters:
    a. For ASI server deployment - `asi.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution`, `asi.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution`
    b. For AC server deployment - `ac.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution`, `ac.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution`
    c. For API server deployment - `api.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution`, `api.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution`

Please Note that depending on the architecture preference selected for the parameter `arch`, a suitable value for node affinity is automatically appended in addition to the user provided values.

  2. Pod affinity - This can be configured using the following parameters:
    a. For ASI server deployment - `asi.podAffinity.requiredDuringSchedulingIgnoredDuringExecution`, `asi.podAffinity.preferredDuringSchedulingIgnoredDuringExecution`
    b. For AC server deployment - `ac.podAffinity.requiredDuringSchedulingIgnoredDuringExecution`, `ac.podAffinity.preferredDuringSchedulingIgnoredDuringExecution`
    c. For API server deployment - `api.podAffinity.requiredDuringSchedulingIgnoredDuringExecution`, `api.podAffinity.preferredDuringSchedulingIgnoredDuringExecution`

  3. Pod anti-affinity - This can be configured using the following parameters:
    a. For ASI server deployment - `asi.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution`, `asi.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution`
    b. For AC server deployment - `ac.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution`, `ac.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution`
    c. For API server deployment - `api.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution`, `api.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution`

Depending on the value of the configuration parameter `podAntiAffinity.replicaNotOnSameNode`, a suitable value for pod anti-affinity is automatically appended in addition to the user-provided values. This controls whether replicas of a pod should be scheduled on the same node. If the value is `prefer` then `podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution` is automatically appended, whereas if the value is `require` then `podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution` is appended. If the value is blank, no pod anti-affinity value is automatically appended. If the value is `prefer`, the weighting for the preference is set using the parameter `podAntiAffinity.weightForPreference`, which should be specified in the range 1-100.
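As an illustration, the fragment below pins the ASI deployment to nodes carrying a hypothetical label and asks that its replicas prefer not to share a node. This is a sketch only: the label key and value are assumptions, the affinity values follow the standard Kubernetes PodSpec structure referenced in the configuration table, and the exact placement of replicaNotOnSameNode is defined by the chart.

asi:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: b2bi-workload        # hypothetical node label
          operator: In
          values:
          - asi
  podAntiAffinity:
    replicaNotOnSameNode: prefer    # appends a preferred anti-affinity rule
    weightForPreference: 100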

Enabling Self-Healing of your applications

Failures are inevitable and some of the failures seen in B2B containers could be due to:

  • Container crashes for any specific/unforeseen reason
  • Supporting applications such as DB2 or MQ are not reachable
  • Unscheduled/planned maintenance

IBM Sterling B2B Integrator application containers are equipped with the following capabilities, which enable the application to recover gracefully or spin up additional instances:

  • Multiple replicas – Out of the box, the product comes with 1 replica for each deployment – ASI/AC/API server. In production it is recommended to have 2-3 replicas per deployment, based on the expected load and the server components in use, so that if one of the pods goes down for any reason, another replica in the deployment can continue to service your requests.
  • Liveness probe – This probe checks the container on startup to validate that it has been brought up successfully, and then checks at a regular interval that the server is still alive. If no signal is received within the interval, the container is automatically restarted.
  • Readiness probe – This probe checks that the container is ready to receive requests. In the case of the ASI application server, for example, it validates that the dashboard login page is reachable.

The time intervals for liveness and readiness checks are configurable using the values yaml (refer to the configuration section for more details).
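For example, to give the ASI server more start-up headroom before health checks begin, the probe timings could be tuned roughly as follows (a sketch; the defaults are listed in the configuration table):

asi:
  livenessProbe:
    initialDelaySeconds: 120   # wait longer before the first liveness check
    timeoutSeconds: 30
    periodSeconds: 60
  readinessProbe:
    initialDelaySeconds: 180   # wait longer before the first readiness check
    timeoutSeconds: 5
    periodSeconds: 60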

Enabling auto scaling of your applications

Once you have validated the images in your QA and development cycles, they are ready to be consumed in a staging or pre-production environment before the final push to production. In pre-production environments it is recommended to perform production-like load tests to validate the performance, scalability, and availability of the application.

OpenShift provides recommended installation and host practices, performed by a cluster administrator, for scaling and performance of the pods. More information on these techniques can be read here.

The IBM Sterling B2B Integrator product can be auto scaled by making use of the Horizontal Pod Autoscaler (HPA), a Kubernetes capability delivered by OpenShift. As of now, OpenShift allows autoscaling based on CPU and memory (beta) utilization. The autoscaling feature can be enabled and configured at install time using the 'autoscaling' section for each deployment – ASI/AC/API – in the values yaml file (refer to the configuration section for more details).
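A minimal install-time fragment enabling HPA for the ASI deployment could look like this sketch (the ac and api deployments expose the same block):

asi:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 4
    targetCPUUtilizationPercentage: 60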

Apart from install time, the ASI/AC/API server deployments can be scaled at runtime by executing the commands below:

oc autoscale deployment <deployment name> --min=1 --max=2 --cpu-percent=80 -n <namespace>

Ex 1:

oc autoscale deployment <release-name>-b2bi-asi-server --min=3 --max=5 --cpu-percent=75 -n b2bi-namespace

Ex 2:

oc autoscale deployment <release-name>-b2bi-ac-server --min=2 --max=4 --cpu-percent=80 -n b2bi-namespace

In the above examples, both the B2B Integrator ASI and AC application servers are auto scaled, starting with the specified minimum number of instances and scaling up to a ceiling of 5 and 4 containers respectively when each container's CPU utilization exceeds the specified percentage.

However, keep in mind, this setting must be tested internally on your environment to identify the resource usage. HPA also should not be confused with the resource requests and limits set at the individual deployment level, as the latter is defined to ensure the pod can be started on a node within the request and limits.

Setting up Log Aggregation on OpenShift Container Platform

If the logs.enableAppLogOnConsole configuration is enabled, each application pod redirects all application logs to the pod's console. To aggregate the logs, you can deploy the EFK stack on the OpenShift Container Platform as described in detail here.

Upgrade Considerations

To upgrade from v5.x, installed on bare metal servers using the legacy IBM Installation Manager (IIM), to v6.0.3 Certified Containers on RHOCP, follow the steps below:

  1. Setup RHOCP with all the pre-requisite software, tools and configurations as outlined in this document and product documentation.
  2. As part of the application configuration, set dataSetup.enabled and dataSetup.upgrade to true. This option will upgrade the database schema, tables, and data from v5.x to v6.0.3 when the certified helm charts are installed using the command line or UI.
  3. Install the application containers on RHOCP using command line or UI options.
  4. Upload all supported customizations to the application using the customization UI or REST API. The following custom entry points are supported through customization
    a. Custom jars
    b. Custom Services
    c. Customer Override properties and User Defined property files
    d. User Exits
    e. Custom Protocol and Policies
    f. UI Branding

Any custom integration points not covered by the above and requiring manual file system updates are currently not supported.

Summary

This article has touched upon various aspects that are essential for implementing IBM Sterling B2B Integrator Certified Containers on the Red Hat OpenShift container platform. B2B Integrator containers orchestrated on the OpenShift container platform, with Red Hat certified images and IBM certified helm charts, provide a secure, compliant, scalable, performant, and enterprise-grade application so that you, as a customer, can focus on leveraging the B2B platform to integrate complex B2B EDI processes with your partner communities at scale.


#IBMSterlingB2BIntegratorandIBMSterlingFileGatewayDevelopers
#DataExchange

Comments

Thu August 27, 2020 05:08 PM

Hello Nikesh,

Thanks for this article on how to install Sterling Integrator on OpenShift.

I am considering exploring this stack as our next upgrade, so I will be following these instructions pretty closely as I proceed.

One thing I notice is that there is no mention of installing the Docker software. The OpenShift documentation for 3.11 suggests Docker 1.13 can be installed by a simple yum command:

https://docs.openshift.com/container-platform/3.11/install/host_preparation.html#installing-docker

But that is not the Docker version required by Sterling Integrator:
https://www.ibm.com/support/knowledgecenter/SS3JSW_6.0.3/planning/planning/integrator/SI_SystemRequirements_Docker.html

So somewhere in the steps laid out in this article, do you agree that there should be an additional step that involves installing Docker Enterprise 17.xx (assuming we have acquired the Enterprise license from Mirantis)?

Thanks,
Sreeni.