Upgrading IBM B2Bi/SFG Version 6.1.2.x to Version 6.2.0.x Using Certified Containers

By Connor McGoey posted Fri February 28, 2025 09:28 AM

  


Table of Contents

Introductory Notes

Helm

Backing Up the Database

(Optional) Scaling Up the B2Bi/SFG Deployment

Pulling B2Bi/SFG 6.2.0.3 Images from ICR

A Note on B2Bi/SFG Database Upgrade Values

Configuring a B2Bi/SFG 6.2.0.3 Override File and Upgrading

Resources

Acronyms

Introductory Notes

Products

IBM Sterling Business to Business Integrator (B2Bi) "streamlines complex B2B and electronic data interchange (EDI) processes across partner communities within a unified gateway".

IBM Sterling File Gateway (SFG) is a file transfer consolidation system that provides scalability and security. SFG can intelligently monitor, administer, route and transform high volumes of inbound and outbound files. 

Intent

The purpose of this blog is to provide non-production details on how to upgrade IBM B2Bi/SFG from version 6.1.2.x to version 6.2.0.x using certified containers. If your deployments need information not covered in this blog, refer to the B2Bi/SFG documentation for details specific to your deployments' needs.

Additionally, this blog will cover the process of pulling the necessary B2Bi/SFG images from the IBM container registry, tagging them, and then pushing the images to a private container registry.

It will also cover upgrading B2Bi/SFG with a 2-node upgrade to simulate a rolling upgrade. Note that using a 2-node upgrade is not necessary for upgrading the product. The steps in the blog specific to a 2-node upgrade are outlined and can be ignored if your installation / upgrade is single-node.

A Note on B2Bi/SFG Versions

This blog will use B2Bi/SFG versions 6.1.2.6 and 6.2.0.3 to demonstrate a concrete working example of upgrading B2Bi/SFG. However, the general steps outlined should work across other 6.1.2.x and 6.2.0.x versions.

Presumptions

Prior to following the upgrade steps in this blog, it is important to note that the environment and resulting deployments should not be used to replicate and/or produce a production environment for B2Bi/SFG. Additionally, a few presumptions are made with regards to the upgrade and its steps:

      • The existing B2Bi/SFG deployment and its to-be upgraded deployment are not air-gapped as they do not use any secure connection configurations or options. For more information on B2Bi/SFG secure connections, refer to the B2Bi/SFG documents for necessary steps.
      • An SFG or B2Bi deployment must exist prior to upgrading and be accessible from your cluster. In my case, I will use the SFG deployment previously deployed in my blog "Installing IBM Sterling File Gateway and ITX / ITXA on Red Hat OpenShift Using Certified Container".
      • These steps cover upgrading a DB2 database. If you are using a different database, refer to its documentation for steps on database back-ups.
      • These instructions were developed on an OpenShift cluster running in the IBM Cloud. However, kubectl commands have also been provided and the instructions should work in Kubernetes as well.
      • Docker Desktop, Podman, or some other local containerization tool must be installed to pull, tag, and push the B2Bi/SFG 6.2.0.3 images.
      • If you choose not to use a private container registry, note that the Helm releases pull images for the deployments from the IBM container registry. Steps for configuring your development environment to pull the necessary images are referenced in the "Prerequisites" subsection of the B2Bi/SFG Documents for Upgrading using Certified Containers.
      • The B2Bi/SFG Helm chart is version 3.0.5 which uses B2Bi/SFG version 6.2.0.3.

Deployment / Configuration Order

Before upgrading B2Bi/SFG, I will need to ensure my current deployment and resources are ready. Below are the ordered steps I will take to perform the upgrade:

    1. (Optional) Scale my current 6.1.2.x B2Bi/SFG deployment to 2-nodes.
    2. Back up my B2Bi/SFG database. Before upgrading, I will back up my database so that I have a database to roll back to in the case of a failed upgrade.
    3. Pull, tag, and push the B2Bi/SFG 6.2.0.x images required to upgrade.
    4. Configure an override.yaml file to be used in conjunction with the values.yaml file provided in the B2Bi/SFG Helm chart.
    5. Perform the upgrade using Helm CLI commands.

Helm

This blog uses Helm v3.14.3 but will work with any Helm version >= 3.13.x. With regards to B2Bi or SFG, using a different/older version of Helm may change how the charts should be handled. For issues regarding Helm, refer to the Helm documentation on how to install Helm and the commands available to you via the Helm CLI.

The IBM B2Bi 3.0.5 Helm chart and SFG 3.0.5 Helm chart are available under the Resources subsection.

For the purposes of this blog, Helm automates the containerized upgrade of a B2Bi/SFG release when used in conjunction with the provided Helm chart and a configurable YAML file. The YAML file defines the configuration values applied to the chart.

(Optional) Scaling Up the B2Bi/SFG Deployment

Before upgrading, I will need to scale my B2Bi/SFG statefulsets to 2 nodes so that the upgrade is a rolling upgrade.

To do this, I'll first get the name of all my B2Bi/SFG statefulsets by running either of the following commands from within my B2Bi/SFG namespace:

oc get statefulsets

kubectl get statefulsets

Because my SFG release is named my-sfg-release, the statefulsets are named:

    • my-sfg-release-sfg-ac-server
    • my-sfg-release-sfg-api-server
    • my-sfg-release-sfg-asi-server

For each statefulset, I will run a command of the following format to scale them to 2-pods each:

kubectl scale statefulset <ac/api/asi statefulset name> --replicas 2
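These per-statefulset commands can be generated in one small loop. The sketch below only prints each command (using the release name from this blog; substitute your own) so it can be reviewed first; pipe its output to sh to actually run the commands:

```shell
# Prints a scale command for each B2Bi/SFG statefulset.
# RELEASE is the release name used in this blog; substitute your own.
RELEASE=my-sfg-release
for svc in ac api asi; do
  echo "kubectl scale statefulset ${RELEASE}-sfg-${svc}-server --replicas 2"
done
```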

I can verify that the statefulsets were scaled up by again running either of the following commands:

oc get statefulsets

kubectl get statefulsets

Next to each of my B2Bi/SFG statefulsets I see 2/2 in the READY column:

NAME                             READY
my-sfg-release-sfg-ac-server     2/2
my-sfg-release-sfg-api-server    2/2
my-sfg-release-sfg-asi-server    2/2

Backing Up the Database

Notes

This environment is just one example of a database configuration where DB2 is also running in a containerized environment within the same cluster. Different databases and database configurations may have different steps on backing them up.

To do online backups of a DB2 database with archive logging (automatically backing up transaction logs), you must first enable archive logging and perform an offline backup.

To back up the database, I will first need to note the mount path of the database PVC mounted to the DB2 instance. In my case, the database PVC is mounted to the pod under /database.

I'll then access my DB2 instance. Recalling from the DB2 setup steps from my B2Bi/SFG installation blog, I can access my DB2 pod from within its namespace by running:

kubectl exec --stdin --tty <DB2 POD> -- /bin/bash

And then switching to my DB2 user by running:

su - db2inst1

Next, I'll check the status of archive logging by running the following command:

db2 get db cfg for <DB Name> | grep LOGARCHMETH

If the first archive method contains a value other than OFF, skip to the steps on creating an online backup. The output for me is:

First log archive method                        (LOGARCHMETH1) = OFF
Second log archive method                       (LOGARCHMETH2) = OFF

I'll create a directory for storing my backups under my /database directory:

mkdir /database/online_backup

Then, as the root user of the DB2 pod, I will run the following command to provide permissions to the DB2 user and DB2 admin:

su - <DB2 Pod Root>

chown db2inst1:db2iadm1 /database/online_backup

I'll then switch back to the DB2 user and update the database configuration to set the first archive method (replace <DB Name> with the name of your B2Bi/SFG database):

su - db2inst1

db2 update database configuration for <DB Name> using LOGARCHMETH1 'disk:/database/online_backup'

I can confirm the change took place by rerunning the following command:

db2 get db cfg for <DB Name> | grep LOGARCHMETH

My output confirms the change:

First log archive method                      (LOGARCHMETH1) = DISK:/database/online_backup/
Second log archive method                     (LOGARCHMETH2) = OFF

I can now proceed with making the necessary offline backup. I'll run the following commands to force all applications off my database, deactivate it, and back up the database to my /database/online_backup directory:

db2 force application all

db2 deactivate db <DB Name>

db2 backup database <DB Name> to /database/online_backup

I verify the backup is there by finding its file name and running the db2ckbkp command:

ls /database/online_backup

db2ckbkp /database/online_backup/<Backup File Name>

I then get the output to verify the backup:

Image Verification Complete - successful

To reactivate my database, I run:

db2 activate db <DB Name>

All future backups can now be performed online without shutting down the database. The command I'll run to create an online backup is:

db2 backup database <DB Name> online to /database/online_backup/ compress include logs

I can verify the backup file by using the same commands used to check my previous offline backup.

Pulling B2Bi/SFG 6.2.0.3 Images from ICR

I will now download all the B2Bi/SFG 6.2.0.3 images, tag them, and push them to my private registry. The following steps assume you have an entitlement key and can log in to cp.icr.io using that key. Steps for doing this are in the B2Bi/SFG documentation page for downloading the container images.

After logging in to ICR, I will run the following commands to pull the necessary images to my local machine:

docker pull --platform=linux/amd64 cp.icr.io/cp/opencontent-common-utils:1.1.66
docker pull --platform=linux/amd64 cp.icr.io/cp/ibm-b2bi/b2bi:6.2.0.3
docker pull --platform=linux/amd64 cp.icr.io/cp/ibm-b2bi/b2bi-dbsetup:6.2.0.3
docker pull --platform=linux/amd64 cp.icr.io/cp/ibm-b2bi/b2bi-purge:6.2.0.3
docker pull --platform=linux/amd64 cp.icr.io/cp/ibm-b2bi/b2bi-ps:6.2.0.3
docker pull --platform=linux/amd64 cp.icr.io/cp/ibm-b2bi/b2bi-resources:6.2.0.3
docker pull --platform=linux/amd64 cp.icr.io/cp/ibm-b2bi/b2bi-documentservice:1.0.0.3

After pulling all the necessary images, I will tag each according to my private registry prefix format. The prefix I will use for all the images is us.icr.io/b2bsfg/upgrade. So, I will perform a docker tag command for each image in the format of:

docker tag <Image Name>:<Image Tag> us.icr.io/b2bsfg/upgrade/<Image>:<Tag>

For example, for the cp.icr.io/cp/ibm-b2bi/b2bi-purge:6.2.0.3 image I will run:

docker tag cp.icr.io/cp/ibm-b2bi/b2bi-purge:6.2.0.3 us.icr.io/b2bsfg/upgrade/b2bi-purge:6.2.0.3

Finally, I'll push each newly tagged image to my us.icr.io image repository by running:

Note: you can optionally verify the image signatures. I will remove them with the --remove-signatures option for my upgrade.

docker push <Newly Tagged Image Name>:<Tag> --remove-signatures

Ex:

docker push us.icr.io/b2bsfg/upgrade/b2bi-purge:6.2.0.3 --remove-signatures
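The pull, tag, and push steps above can be scripted over the full image list. As with the earlier scaling loop, this sketch only prints each command for review (the destination prefix is the one used in this blog; adjust as needed); pipe the output to sh to execute:

```shell
# Prints the pull, tag, and push commands for each 6.2.0.3 image.
# DST is the private registry prefix used in this blog; adjust as needed.
SRC=cp.icr.io/cp
DST=us.icr.io/b2bsfg/upgrade
for img in opencontent-common-utils:1.1.66 ibm-b2bi/b2bi:6.2.0.3 \
           ibm-b2bi/b2bi-dbsetup:6.2.0.3 ibm-b2bi/b2bi-purge:6.2.0.3 \
           ibm-b2bi/b2bi-ps:6.2.0.3 ibm-b2bi/b2bi-resources:6.2.0.3 \
           ibm-b2bi/b2bi-documentservice:1.0.0.3; do
  name=${img##*/}   # drop the ibm-b2bi/ path component, keep name:tag
  echo "docker pull --platform=linux/amd64 $SRC/$img"
  echo "docker tag $SRC/$img $DST/$name"
  echo "docker push $DST/$name --remove-signatures"
done
```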

In the B2Bi/SFG Helm charts, an image's digest takes precedence over its tag if both are present. So, to ensure that I am using the correct images, I will use each image's repository/name in combination with its tag and leave all the digests empty.

A Note on B2Bi/SFG Database Upgrade Values

The upcoming section "Configuring a B2Bi/SFG 6.2.0.3 Override File and Upgrading" will go over the steps needed to upgrade B2Bi/SFG from 6.1.2.x to 6.2.0.x and new and/or changed fields in the 6.2.0.3 Helm chart values.yaml file.

However, for any B2Bi/SFG upgrade it is important to note what the following database fields mean and how changing them would affect your resulting upgrade flow and process:

The dataSetup.enabled field controls whether the database setup job will run at all. This field should be set to true for a new installation of B2Bi/SFG or an upgrade to a new release and set to false when using helm upgrade to modify configuration without upgrading to a new version of code.

The dataSetup.upgrade field will generate the delta for schema changes and create database script files on the resources volume. This should be set to true for an upgrade.

The setupCfg.dbCreateSchema field enables or disables the creation or update of the database schema during the database setup. If set to false, the database administrator can review the generated script files, run them manually, and then rerun a Helm upgrade with dataSetup.enabled: false; alternatively, they can set setupCfg.dbCreateSchema: true so the changes take effect via the database setup job. Setting setupCfg.dbCreateSchema to false is neither recommended nor needed.

Configuring the database job is also described in the B2Bi/SFG documents. I will allow the database setup job to handle the process of updating the schema and database. So, my combination of the above values will be:

    • dataSetup.enabled: true
    • dataSetup.upgrade: true
    • setupCfg.dbCreateSchema: true
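In override.yaml form, the combination above nests under two top-level sections:

```yaml
# Database-upgrade values used in this blog's flow: the database
# setup job generates and applies the schema delta automatically.
dataSetup:
  enabled: true
  upgrade: true
setupCfg:
  dbCreateSchema: true
```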

Configuring a B2Bi/SFG 6.2.0.3 Override File and Upgrading

Quiescing Traffic

During the upgrade process it is important to ensure that traffic in the B2Bi/SFG system is quiesced. This can be done several ways including stopping traffic to the system externally via the firewalls controlling traffic into the system, manually via the B2Bi/SFG "soft stop" option in the user interface, or allowing the Helm upgrade / termination grace period to gracefully stop the pod and its processes.

With a properly configured termination grace period, processes, BP cleanup, and other currently running workloads will be given a long enough period to stop gracefully before the pod is killed.

The exact setting for your environment will vary depending on the amount of traffic, number of BPs, and so on. It is better to err on the side of a longer grace period: if the container stops before the grace period expires, there is no penalty. The risk of a very long grace period is that a container which hangs on shutdown will not be killed until the grace period expires.

The default termination grace period in the B2Bi/SFG Helm chart is 30 seconds. I will change the value to 300 seconds to allow the pods enough time to gracefully shut down:

setupCfg.terminationGracePeriod: 300

Configuring Property Files

B2Bi/SFG property files such as system overrides, server overrides, and JVM options are in the Helm chart under the /config directory. If your current deployment has existing configuration files under /config and you are upgrading, refer to the Configuring Property Files - Upgrading section of the B2Bi/SFG documentation.

Note that the old system_overrides.properties file should not be directly copied into the upgrade chart's /config directory. Instead, you must update the required values from your previous configuration into the pre-existing system_overrides.properties file within the upgrade Helm chart. Both the server-overrides.xml and jvm.options files can be copied over.

Network Policies

B2Bi/SFG ships with a few out of the box network policies as per mandatory security guidelines. Below are the default policies for ingress and egress:

Ingress

        • Deny all ingress traffic.
        • Allow ingress traffic from all pods in the current namespace in the cluster.
        • Allow ingress traffic on the additional configured ports in Helm values.

Egress

        • Deny all egress traffic.
        • Allow egress traffic within the cluster.

If your deployment requires connection to and/or from locations not allowed from the default policies listed above (such as an external database), you can define custom global or resource specific network policies within your override.yaml file. Global network policies apply to all pods created by the Helm release while resource specific network policies are applied to the resource they are defined under (ASI, API, or AC statefulsets). 

In my configuration I will create an ingress and egress network policy to allow all traffic to/from my deployment. I'll do this by defining custom global network policies within my override.yaml file:

global:
  networkPolicies:
    ingress:
      customPolicies:
        - name: "ingress-allow-all"
    egress:
      customPolicies:
        - name: "egress-allow-all"

Note: global.networkPolicies.ingress and global.networkPolicies.egress both have an enabled flag used to enable / disable the creation of the out-of-the-box global network policies. I have not specified these values as their defaults are both true.

The above definitions in my override.yaml file will let Helm generate ingress and egress policies where the resulting policies have spec.ingress and spec.egress fields defined, but empty. This means that both the ingress and egress policies will allow all traffic from/to all sources.
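In plain Kubernetes terms, an allow-all ingress policy of this kind looks roughly like the following (the name comes from my override.yaml; labels and exact structure of the chart's generated policy may differ). A rule list containing a single empty rule matches traffic from every source:

```yaml
# Illustrative allow-all ingress policy; the chart's generated
# policy is equivalent in effect, though its exact output may differ.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-allow-all
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - {}                 # one empty rule allows all ingress traffic
```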

Note: Network Policies this broad are not recommended for production deployments.

Configuring override.yaml

Note: For more information on configuring the B2Bi/SFG values, refer to the B2Bi/SFG documentation, which has detailed descriptions of the values.yaml file, steps on configuring for performance, init container details, database job configuration, and more.

The last steps before upgrading my B2Bi/SFG installation are to download the relevant Helm chart (found in the Resources section), create an override.yaml file to specify my configuration, and perform the upgrade.

With this approach, instead of directly modifying the Helm chart's values.yaml file, the changes made in the override.yaml file will be placed on top of the defaults in values.yaml. The benefits of doing this are:

      • A smaller and easier to maintain file containing only your specific configuration changes.
      • Upgrading to a new chart no longer requires diffing your current values.yaml with new values.yaml and moving over changes. Instead, you can reuse the override.yaml file and add/change the configuration as needed.
      • It is much easier to keep track of which values you have changed and which you have not. In other words, any value not specified in override.yaml remains at its default.

First, I'll note the differences between the latest 6.1.2.6 values.yaml and 6.2.0.3 values.yaml files:

      • appLogsPVC now has an enabled value which can be set to true or false.
        • The default value is set to false.
      • The integrations section has two new subsections for itxIntegration and itxaIntegration.
        • Both itxIntegration.enabled and itxaIntegration.enabled are defaulted to false.
      • There is now a setupCfg.launchClaServer value for CLA configurations which can be set to true or false.
        • The default value is set to false.
      • asi.readinessProbe.initialDelaySeconds has gone from 60 seconds to 30.
      • asi.startupProbe.initialDelaySeconds has gone from 120 seconds to 300.
      • asi.startupProbe.failureThreshold has gone from 3 to 6.
      • purge now has a subsection for internalAccess, with options for enabling HTTPS and specifying a TLS secret.
        • The default value for internalAccess.enableHttps is set to true.
      • There is now a documentService section. By default, documentService.enabled is set to false.

Apart from purge.internalAccess.enableHttps, which I will set to false in my configuration, none of the above changes require modification for my deployment, so they will not be added to my override.yaml file.

Additionally, I will need to ensure that my namespace has an image pull secret for pulling the B2Bi/SFG images from my private repository. My cluster already has a pull secret (<Pull Secret>) for my private registry located in the default namespace. If you need to create a pull secret for your private registry, refer to the Kubernetes or OCP documentation on creating a pull secret. To create a copy of this secret in my B2Bi/SFG namespace (<B2B/SFG Namespace>), I run the following command:

kubectl get secret <Pull Secret> --namespace=default -oyaml | grep -v '^\s*namespace:\s' | kubectl apply --namespace=<B2B/SFG Namespace> -f -
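For reference, the object being copied is a standard Kubernetes dockerconfigjson secret; the grep in the command above strips its namespace field so that kubectl apply can recreate it in the target namespace. Its general shape (values illustrative) is:

```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: <Pull Secret>
data:
  # base64-encoded registry credentials (a docker config.json)
  .dockerconfigjson: <base64-encoded credentials>
```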

With those differences in mind and my pull secret made, I will create an override.yaml file. In this file I will only specify the values I want to change from the defaults in the values.yaml file provided in the B2Bi/SFG Helm chart. I will ensure that each changed value in my current 6.1.2.6 installation (ingress hosts, server ports, database connection properties, etc.) matches those in the override.

While other changes may be required depending on your specific environment and configuration, this override.yaml provides a good template of what you commonly would need to customize for a basic deployment.

global 

The global section is used for accepting and choosing the license; its settings apply globally to all Helm resources. The repository image and pull secret are used for all three B2Bi/SFG statefulsets (AC/API/ASI). Additionally, the pull secret will be applied to all empty *.pullSecret values in the chart, such as dataSetup.image.pullSecret. Because the default for all *.pullSecret values in the chart is "", I will specify the pull secret here so it applies throughout the chart. Finally, as previously mentioned, I will specify the digest as "" to ensure that the image tag is used instead. I'll also define my "allow all" ingress and egress policies in this section.

serviceAccount

This section has one value used to specify the service account name for the installation. The chart's default value is default.

resourcesInit

This section is for the init container for application resources such as the database driver JAR. Setting resourcesInit.enabled: true and appResourcesPVC.enabled: false ensures that the init container is used for resources. I have changed the repository to match the repository / image from my private container registry and the digest to "" to enforce the use of the image tag.

persistence

The persistence section of the values.yaml file is used to specify storage access to persistent volumes which is enabled by default. The useDynamicProvisioning value enables the dynamic provisioning of persistent volumes. I will change this from false to true to allow K8s / OCP to handle provisioning the necessary Persistent Volumes.

appResourcesPVC

This section is used to control the options for the resources PVC. I have set enabled to false which means I intend to use the resources init container. However, it is important to note that appResourcesPVC.storageClassName, appResourcesPVC.accessMode, and appResourcesPVC.size all apply to the init container regardless of whether appResourcesPVC.enabled is set to true or false. By default, appResourcesPVC.storageClassName is "" (which enforces the use of the cluster's default storage class), appResourcesPVC.accessMode is ReadOnlyMany, and appResourcesPVC.size is 100Mi. I will use these defaults in my configuration, so they are not specified in my override.yaml file.

dataSetup

The values in the dataSetup section allow you to choose whether it is enabled, running for an upgrade, and the image settings. Different configuration options for the dataSetup values are mentioned in the previous section "A Note on B2Bi/SFG Database Upgrade Values".

env

This section specifies global environment variables. I have left all values as default except upgradeCompatibilityVerified, which I changed to true. This value must be set to true to allow an upgrade.

setupCfg

The setupCfg section is used for various setup configuration values. Here I specify the system passphrase secret, database configuration / connection values, the termination grace period, admin email address / SMTP host, and option for using SSL for RMI.

asi / ac / api

The asi, ac, and api sections are used to specify configuration details for each of the B2Bi/SFG statefulsets, such as additional environment variables, frontend/backend services, probes, internal/external access, ingress, PVCs, and resources. For all three I will set the replicaCount to 2, disable internalAccess.enableHttps, give the ingress.internal.host addresses, and disable both internal and external ingress TLS. Additionally, for asi and ac I will give an adapter port for the backend service.

test

This section is used to specify the Helm test and cleanup docker image. I will change the image.repository to match my private repository image and the image.digest to "" to enforce using the tag.

purge

The purge section allows for specifying the external purge job and its configuration values. In my configuration I will enable it and change image.repository and image.digest in the same way as in the test section (ensuring I use the purge image from my private repository). I'll also set internalAccess.enableHttps to false. Note that purge.schedule is empty by default in the chart but is a required field; I have specified 0 0 * * * for the schedule, which runs the purge every day at 12:00 AM.

My final override.yaml file looks as follows:

global:
  license: true
  licenseType: "non-prod"
  image:
    repository: "us.icr.io/b2bsfg/upgrade/b2bi"
    digest: ""
    pullSecret: "<Pull Secret>"
  networkPolicies:
    ingress:
      customPolicies:
        - name: "ingress-allow-all"
    egress:
      customPolicies:
        - name: "egress-allow-all"

serviceAccount:
  name: default 

resourcesInit:
  enabled: true
  image:
    repository: "us.icr.io/b2bsfg/upgrade/b2bi-resources"
    digest: ""

persistence:
  useDynamicProvisioning: true

appResourcesPVC:
  enabled: false

dataSetup:
  enabled: true
  upgrade: true
  image:
    repository: "us.icr.io/b2bsfg/upgrade/b2bi-dbsetup"
    digest: ""

env:
  upgradeCompatibilityVerified: true

setupCfg:
  systemPassphraseSecret: b2b-system-passphrase-secret
  dbVendor: db2
  dbHost: <DB2 LB Cluster IP>
  dbPort: 50000
  dbData: SFGDB
  dbDrivers: db2jcc4.jar
  dbSecret: b2b-db-secret
  dbCreateSchema: true
  adminEmailAddress: <Your Admin Email Address>
  smtpHost: localhost
  terminationGracePeriod: 300
  useSslForRmi: false

asi:
  replicaCount: 2
  internalAccess:
    enableHttps: false
  ingress:
    internal:
      host:  "asi.us-south.containers.appdomain.cloud"
      tls:
        enabled: false
    external:
      tls:
        enabled: false
  backendService:
    ports:
      - name: adapter-1
        port: 30201
        targetPort: 30201
        nodePort: 30201
        protocol: TCP

ac:
  replicaCount: 2
  internalAccess:
    enableHttps: false
  ingress:
    internal:
      host: "ac.us-south.containers.appdomain.cloud"
      tls:
        enabled: false
    external:
      tls:
        enabled: false
  backendService:
    ports: 
      - name: adapter-1
        port: 30201
        targetPort: 30201
        nodePort: 30201
        protocol: TCP

api:
  replicaCount: 2
  internalAccess:
    enableHttps: false
  ingress:
    internal:
      host: "api.us-south.containers.appdomain.cloud"
      tls:
        enabled: false

test:
  image:
    repository: 'us.icr.io/b2bsfg/upgrade/opencontent-common-utils'
    digest: ""

purge:
  enabled: true
  image:
    repository: "us.icr.io/b2bsfg/upgrade/b2bi-purge"
    digest: ""
  schedule: "0 0 * * *"
  internalAccess:
    enableHttps: false

Note that from the above values, the following are specifically set to ensure that I can perform an upgrade:

      • dataSetup.enabled: true
      • dataSetup.upgrade: true
      • env.upgradeCompatibilityVerified: true
      • setupCfg.dbCreateSchema: true

In particular, env.upgradeCompatibilityVerified must be set to true to allow the upgrade.

Additionally, I have specifically set all of the *.image.digest values to the empty string "" to enforce the Helm chart to use the image tags.

After the upgrade, I will set dataSetup.enabled and env.upgradeCompatibilityVerified back to false.

Also note that the ac.replicaCount, asi.replicaCount, and api.replicaCount values are all set to 2. This means that my deployment will remain at 2 nodes after the upgrade. If you are not doing a 2-node upgrade or if you do not want your resulting deployment to be 2-node, then you can omit the *.replicaCount: 2 values as the replica count for AC, ASI, and API statefulsets defaults to 1 (single-node) in the Helm chart.

After saving my override.yaml file, I will do a dry run of the Helm upgrade to verify that I have met the requirements for the Helm upgrade. Before doing so I'll ensure I am in my B2Bi/SFG namespace.

Note: The order of values.yaml and override.yaml in the following command matters. Helm gives the rightmost file the highest precedence; in our case, any value specified in override.yaml will take priority over the same field in values.yaml.

helm upgrade --dry-run -f values.yaml -f override.yaml my-sfg-release --timeout=90m0s .

If my override file is not properly configured or if I am missing any value needed to upgrade my B2Bi/SFG instance, I will get an error message telling me what values are missing:

UPGRADE FAILED: values don't meet the specifications of the schema(s) in the following chart(s):

My override file is configured correctly so I don't see any error message. This means I am ready to perform the upgrade. So, I'll rerun the above command without the --dry-run flag:

helm upgrade -f values.yaml -f override.yaml my-sfg-release --timeout=90m0s .

Verifying the Upgrade

The upgrade will take some time to complete as the database setup job will need to finish before the new pods can be brought up.

I can verify that my database setup job is connected to my database by first finding the name of the database setup pod. I'll run either of the following commands:

oc get pods | grep db

kubectl get pods | grep db

With the name of the db setup pod <DB Setup Pod> I run either of the following commands to see the logs:

oc logs -f <DB Setup Pod>

kubectl logs -f <DB Setup Pod>

The log output shows the setup job connecting to the database and applying the schema updates.

Once the upgrade is done, I can verify my Helm upgrade was successful by running:

helm status my-sfg-release

And noting the STATUS field says deployed.

I can also check my ASI, AC, and API statefulsets by running either of the following commands:

oc get statefulsets | grep "my-sfg-release"

kubectl get statefulsets | grep "my-sfg-release"

Next to each of the three statefulsets I see 2/2 which indicates the pods for ASI, AC, and API are all running.

Finally, I will connect to the B2Bi/SFG user interface using the same steps outlined in the SFG Installation - Verification subsection of my previous B2Bi/SFG installation blog. To get my dashboard route, I'll run either of the following commands:

oc get routes -o wide | grep dashboard

kubectl get routes -o wide | grep dashboard

Using the route will take me to the B2Bi/SFG dashboard login page, indicating that the upgrade was successful.

While in the B2Bi/SFG user dashboard, I can also navigate to the Support button in the administration menu.

On this page I can see the installation version base I upgraded from (6.1.2.0), the upgraded version base that is currently running (6.2.0.0), and the fix pack version at the bottom of the screen. The fix pack 6020003 correlates to 6.2.0.3.
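Assuming the usual packing of version.release.mod.fixpack with two digits for each trailing field (so 6.2.0.3 is displayed as 6020003), the fix pack number can be decoded with a little shell arithmetic. A sketch, where decode_fixpack is a hypothetical helper:

```shell
# Decode a packed fix pack number (V RR MM FF) into dotted form.
# Assumes two digits each for release, mod, and fix pack levels.
decode_fixpack() {
  n=$1
  f=$((n % 100)); n=$((n / 100))
  m=$((n % 100)); n=$((n / 100))
  r=$((n % 100)); v=$((n / 100))
  echo "$v.$r.$m.$f"
}

decode_fixpack 6020003   # prints 6.2.0.3
```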

Resources

Helm Charts

IBM B2Bi Version 6.2.0.3

IBM SFG Version 6.2.0.3

Installation Document References

Installing B2Bi/SFG Using Certified Container

Configuring B2Bi/SFG Certified Container

DB2 Database Backup

Downloading Certified Container Images from IBM Entitled Registry

Configuring the B2Bi/SFG Database Job

Configuring B2Bi/SFG Property Files

Configuring B2Bi/SFG Network Policies

Upgrade B2Bi/SFG

Kubernetes: Creating a Pull Secret

OCP: Creating a Pull Secret

Acronyms

  • OCP: OpenShift Container Platform
  • B2Bi: Sterling Business to Business Integrator
  • SFG: Sterling File Gateway

