Sterling Managed File Transfer

Run B2Bi and OpenShift on your Laptop Part 2 – Windows

By Brian Hall posted Tue January 25, 2022 05:24 PM

  

Contributors: Adolfo Loustaunau and Carlos Chivardi

To read Run B2Bi and OpenShift on your Laptop Part 1 – Linux click here

Containerization and hybrid cloud represent the future of enterprise workloads. Many businesses have already begun their journey to cloud, and those that haven’t are actively considering their approach. Red Hat OpenShift Container Platform provides a world-class operating environment for containerized software by handling the orchestration, management and monitoring of containerized workloads across distributed clusters of nodes. 

 

IBM Certified Containers running on OpenShift provide enormous benefits in your IBM Sterling B2B Integrator/File Gateway environment, such as easy rolling upgrades and dynamic scaling. Customers running the traditional install of Sterling B2Bi/SFG should be familiarizing themselves with containerization and gaining skills in that area. To that end, running a Sterling B2Bi/SFG environment in a development cluster is a great first step. However, access to a full OpenShift cluster for development may be a barrier to entry.

 

Red Hat CodeReady Containers provides an easy path to a development system by combining the OpenShift master and worker nodes into a single image, and virtualizing that image using traditional hypervisors on Windows, MacOS and Linux. This allows a single, sufficiently powerful machine to operate as an OpenShift cluster for development purposes, making the platform much more accessible for those without OpenShift infrastructure already in place.

 

In this second installment of our three-part blog series, we will cover deploying a development system of Sterling B2Bi/SFG on OpenShift via Red Hat CodeReady Containers running on a single Windows machine. We will also utilize the DB2 Operator to deploy DB2 inside the same cluster.

 

Note: Sterling B2Bi/SFG certified container images are downloadable from the IBM Entitled Registry using your entitlement key, via your Passport Advantage account, or via IBM Fix Central if you have the proper entitlement. If your organization does not have entitlements for Sterling B2Bi/SFG certified containers (they are separate from the traditional Sterling B2Bi/SFG licenses), contact your IBM sales representative to inquire about a trial license to deploy the containers before continuing.

 

The first step is to install CodeReady Containers (CRC) and its prerequisites. For the purposes of this blog, our target machine is an 8-core (16-thread) AMD Ryzen 3900X with 64GB of RAM running Windows 10. The machine needs to handle running CRC as well as a minimal Sterling B2Bi/SFG deployment, so it should be as robust a machine as you can find. If necessary, you can squeeze into a machine with 4 cores and 16GB if you don't enable the OpenShift monitoring operators at the end and elect to use a database outside of the cluster.

 

 

Download and Set Up CRC for Windows

 

  • Open https://cloud.redhat.com/openshift/install/crc/installer-provisioned. You will need to create a Red Hat account if you don't have one. Select the Windows platform and click the download button.  This will pull the latest version of CRC.

    • Note: Version 1.28 was used for this document. We leverage some newer features for expanding the virtual disk and operators. Enhanced performance may be attained on a tight system by dropping down to 1.13, but some steps may differ. We strongly recommend obtaining version 1.28, specifically, by downloading it from here, as the installation steps differ quite a bit with 1.32+.

 

 

  • Download the pull secret by clicking the button on the page.

 

 

  • Extract the crc-windows-amd64.zip file to c:\.  Our path was  c:\crc-windows-1.28.0-amd64.

 

  • Rename the folder to crc:

C:\>move c:\crc-windows-1.28.0-amd64 c:\crc

 

  • Add the folder to your path:

Open Control Panel->System and click "Advanced system settings"

Click "Environment Variables"

Select "Path" in the User variables section and click edit.

Click New and add an entry for "c:\crc"

Click OK to exit

 

  • Run the CRC setup:

C:\crc\>crc setup

Click yes to allow administrative changes.

You may need to reboot if your user was just added to the Hyper-V group when the command ran.  In that case, reboot the system, and run crc setup again.

 

  • Note: Storage will go in the c:\users\<user>\.crc folder. This will take 35GB when you create the VM. If you need a location other than your user's home folder, you can move it with a Windows junction (like a symbolic link on Unix). Documentation here: https://docs.microsoft.com/en-us/sysinternals/downloads/junction

 

  • Copy your pull-secret you downloaded to the crc folder and start CRC:

C:\crc\>crc start -p c:\crc\pull-secret

Answer yes to any administrator prompts. The first startup may take a while.

 

  • Once CRC starts, set up the shell environment to work with CRC:

C:\crc\>@FOR /f "tokens=*" %i IN ('crc oc-env') DO @call %i

 

Note the credentials and commands that are mentioned at the end of startup. You will use them later:

 

To access the cluster, first set up your environment by following the instructions returned by executing 'crc oc-env'.

 

Setup shell environment to work with CRC:

@FOR /f "tokens=*" %i IN ('crc oc-env') DO @call %i

 

Then you can access your cluster by running 'oc login -u developer -p developer https://api.crc.testing:6443'.

To login as a cluster admin, run 'oc login -u kubeadmin -p v24F5-arnmP-IxW4S-U9FP7 https://api.crc.testing:6443'.

 

Configure CRC for the B2Bi/SFG container image

 

  • CRC defaults to 4 vCPUs and 8GB of RAM. Adjust based on your hardware capabilities:

C:\crc\>crc config set cpus 10

C:\crc\>crc config set memory 40000

(if you enable monitoring below, use crc config set memory 24576 or more)

 

  • Optional: enable monitoring:

C:\crc\>crc config set enable-cluster-monitoring true

  • Stop the cluster:

C:\crc\>crc stop

 

Resize the disk (reference: https://github.com/code-ready/crc/issues/127)

 

CRC ships with a 35GB disk image.  This is fine for base function, but in this article, we will be deploying DB2 inside the cluster, so we need to stretch the space available first.  The resize command on Windows is a PowerShell cmdlet of the form:

Resize-VHD -Path "$global:homePath\.crc\cache\$crcFileName\crc.vhdx" -SizeBytes ($DiskSizeGB * 1024 * 1024 * 1024 )
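The -SizeBytes argument is plain byte arithmetic. As a sanity check, the 60 GB value used in the commands below works out like this (a quick sketch you can run in any shell):

```shell
# 60 GB expressed in binary bytes, matching the -SizeBytes expression above
size_bytes=$((60 * 1024 * 1024 * 1024))
echo "$size_bytes"   # 64424509440
```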

 

  • To resize the base image to 60 GB (this is the image CRC copies when you run setup):

C:\>powershell

PS C:\>Resize-VHD -Path "c:\Users\brian\.crc\cache\crc_hyperv_4.7.11\crc.vhdx" -SizeBytes ( 60 * 1024 * 1024 * 1024 )

 

  • Resize the current image to 60GB (our currently configured instantiation of crc)

C:\>powershell

PS C:\>Resize-VHD -Path "c:\Users\brian\.crc\machines\crc\crc.vhdx" -SizeBytes ( 60 * 1024 * 1024 * 1024 )

 

  • Start CRC with our config changes and finish the disk resize to 60GB

C:\crc\>crc start -p .\pull-secret --disk-size 60

 

When you start CRC, if you get a warning message like this:  "WARN Failed public DNS query from the cluster: ssh command error:...", use this start command instead to correct it (you will be prompted for Administrator permissions): 

C:\crc\>crc start -p .\pull-secret --disk-size 60 --nameserver 1.1.1.1

 

 

Create a DB2 Database for Sterling B2Bi

 

You can point to any existing DB2 server you have available (or Oracle/MSSQL), including the free DB2 Community Edition Docker container (on Linux) as described in part one of this series.

 

For this article, we will be utilizing the DB2 Operator, which is available from the IBM entitled registry. These instructions were drawn from IBM's Db2 Operator reference material.

 

Enable the IBM Operator Catalog

  • Open the OCP console with "crc console" and log in as kubeadmin
  • Click Operators->OperatorHub
  • Click the + icon in the top right of the console and select "Import YAML"
  • Paste this yaml:

apiVersion: operators.coreos.com/v1alpha1

kind: CatalogSource

metadata:

  name: ibm-operator-catalog

  namespace: openshift-marketplace

spec:

  displayName: "IBM Operator Catalog"

  publisher: IBM

  sourceType: grpc

  image: docker.io/ibmcom/ibm-operator-catalog

  updateStrategy:

    registryPoll:

      interval: 45m 

  • Click Create. Wait on the resulting screen until the "Status" item says READY
  • Click back on OperatorHub and enter "IBM DB2" in the search box. You should see the IBM DB2 operator in the results.

 

Prepare Persistent Volumes for Db2

  • Run "oc get nodes" (replace the node name in the next command if it differs)
  • oc debug nodes/crc-pkjt4-master-0
    Note: To automate this replacement, use this command instead: oc debug node/$(oc get nodes | grep crc | cut -d " " -f 1)

 

  • Enter these commands:

$chroot /host

$sudo setsebool -P container_manage_cgroup true

$chmod 777 -R /mnt/pv-data

$sudo chgrp root -R /mnt/pv-data

$sudo semanage fcontext -a -t container_file_t "/mnt/pv-data(/.*)?"

$sudo restorecon -Rv /mnt/pv-data

Note: You may receive a rejection on the setsebool command — this is okay. The above simply makes sure there are no barriers to Db2 using the persistent volume (PV) directories.

 

  • exit (the chroot). Don't exit the debug pod yet, as the next steps need a bash shell.
  • Login to the cluster from the debug pod (using the correct key from your startup result):

$oc login -u kubeadmin -p v24F5-arnmP-IxW4S-U9FP7 https://api.crc.testing:6443

  • Set pv policy to Retain with:

$mylist=$(oc get pv | grep pv | grep Available | cut -d " " -f1)

$for i in $mylist ; do  oc patch pv $i -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' ; done

  • exit (the debug pod)
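The grep/cut pipelines in the two one-liners above simply parse whitespace-delimited CLI output. Here is a runnable sketch against simulated output; the node and PV names are illustrative, and your cluster's names will differ:

```shell
# Simulated 'oc get nodes' data line -- the real name comes from your cluster
nodes='crc-pkjt4-master-0   Ready    master,worker   30d   v1.20.0'
node=$(echo "$nodes" | grep crc | cut -d " " -f 1)
echo "$node"   # crc-pkjt4-master-0

# Simulated 'oc get pv' output: only Available volumes should be patched
pvs='pv0001   100Gi   RWO,ROX,RWX   Recycle   Available
pv0002   100Gi   RWO,ROX,RWX   Recycle   Bound      db2-dev/data'
mylist=$(echo "$pvs" | grep pv | grep Available | cut -d " " -f1)
for i in $mylist; do
  echo "oc patch pv $i ... (sets Retain)"
done
```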

 

Create a db2-dev folder and project:

C:\>mkdir c:\crc\db2-dev

C:\>cd c:\crc\db2-dev

C:\crc\db2-dev\>oc new-project db2-dev

 

Configure the DB2 pull secret:

  • Log in with your IBMid (apply here: https://myibm.ibm.com/dashboard/)
  • Scroll down to software and click on the Container Software Library
  • Select Get Entitlement Key, and copy the key
  • Create the file c:\crc\db2-dev\setdb2secret.bat with this content:

REM Set the variables to the correct values

REM Use cp for the value of the docker-username field

REM

set ENTITLEDKEY="<Entitlement Key from MyIBM>"

set EMAIL="<email ID associated with your IBM ID>"

set NAMESPACE="<project or Namespace Name for Db2 project>"

 

oc create secret docker-registry ibm-registry ^

 --docker-server=cp.icr.io ^

 --docker-username=cp ^

 --docker-password=%ENTITLEDKEY% ^

 --docker-email=%EMAIL% ^

 --namespace=%NAMESPACE%

<this line needs to be blank, but must exist in the file>

 

  • Make sure you replace the values and include a blank line at the end. The namespace to use is "db2-dev".
  • Run

C:\crc\db2-dev\>setdb2secret.bat

 

Install the DB2 operator:

  • Open the OCP console with "crc console" and log in as kubeadmin
  • Click Operators->OperatorHub
  • Filter by "DB2" and click IBM Db2
  • Click Install
  • Select db2-dev as the namespace. Take the defaults elsewhere.
  • Click Install
  • When finished, click View Operator to view documentation and description.

 

Configure the DB2 instance:

Reference documentation: https://www.ibm.com/support/producthub/db2/docs/content/SSEPGG_11.5.0/com.ibm.db2.luw.db2u_openshift.doc/doc/c_db2ucluster_api.html)

 

  • Create the file c:\crc\db2-dev\db2config.yaml and enter this content:

apiVersion: db2u.databases.ibm.com/v1

kind: Db2uCluster

metadata:

  name: db2ucluster-db2dev1

spec:

  account:

    imagePullSecrets:

     - ibm-registry

    privileged: true

  environment:

    dbType: db2oltp

    database:

      name: B2BI

    instance:

      password: passw0rd

    ldap:

      enabled: false

  podConfig:

    db2u:

      resource:

        db2u:

          limits:

            cpu: '3.0'

            memory: 8Gi

          requests:

            cpu: '0.5'

            memory: 2Gi

  addOns:

    rest:

     enabled: false

    graph:

     enabled: false

 

  size: 1

  version: 11.5.6.0

  storage:

    - name: meta

      type: "create"

      spec:

        accessModes:

          - ReadWriteMany

        resources:

          requests:

            storage: 10Gi

    - name: data

      type: "create"

      spec:

        accessModes:

          - ReadWriteMany

        resources:

          requests:

            storage: 10Gi

 

You may need to update the version number to match the current Operator. 

  • To see what version the operator is using, run:

C:\crc\db2-dev\>oc describe deployments db2u-operator-manager > temp.txt

  • Search the resulting file for "version" (it will be the 3rd occurrence)
  • Ensure you are in the db2-dev project with:

C:\crc\db2-dev\>oc project db2-dev

  • Deploy DB2 with the command:

C:\crc\db2-dev\>oc create -f db2config.yaml

 

You can check deploy status with "oc get pods", or in the web console at Home->Projects->db2-dev. Wait for the "morph" pod to show "Completed".
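If you prefer to poll from the command line, the morph check is a simple filter over the pod listing. A sketch against simulated "oc get pods" output (the pod names and suffixes are illustrative):

```shell
# Simulated 'oc get pods' output; real pod names/suffixes will differ
pods='c-db2ucluster-db2dev1-db2u-0                1/1   Running     0   12m
c-db2ucluster-db2dev1-restore-morph-x2k9p   0/1   Completed   0   4m'

# Prints 1 once the morph job reports Completed
echo "$pods" | grep morph | grep -c Completed
```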

 

Identify the service port for B2Bi to connect to:

  • In the CRC console, navigate to Home->Projects->db2-dev and click Services

For connecting from outside the cluster (for reference):

  • Click the service called c-db2ucluster-db2dev1-db2u-engn-svc and note the node port that maps to 50000. In my case it was 32206.
  • Run 'crc ip' to determine the cluster IP address. For me: 172.20.154.62.

 

Therefore, the cluster address for an external application to connect to will be 172.20.154.62:32206

       

For connecting from inside the cluster (B2Bi will do this):

  • Click the service called c-db2ucluster-db2dev1-db2u and note the Cluster IP that maps to service port 50000. In my case it was 10.217.4.127.


Therefore, the internal cluster address for B2Bi to connect to will be 10.217.4.127:50000
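Put together, those values form the JDBC endpoint that B2Bi will use. This sketch assembles the standard DB2 JDBC URL from the service values noted above (the Cluster IP is from our environment; substitute your own):

```shell
# Values noted from the db2-dev services -- substitute your own Cluster IP
DB_HOST=10.217.4.127
DB_PORT=50000
DB_NAME=B2BI   # database name set in db2config.yaml

# Standard DB2 JDBC URL form: jdbc:db2://<host>:<port>/<database>
echo "jdbc:db2://${DB_HOST}:${DB_PORT}/${DB_NAME}"   # jdbc:db2://10.217.4.127:50000/B2BI
```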

 

Deleting DB2 (if you need to start over for any reason): 

  • Determine which PV Claims exist for the db2-dev project by clicking on Home->Projects and select db2-dev.
  • Click PersistentVolumes and note which volumes are in use by db2-dev (for example, pv0021 and pv0016)
  • Go back to the db2-dev project and, from the actions menu, click delete project and enter the name (db2-dev) to confirm.
  • To clean up the PVs:

Swap out values in the subsequent commands as needed:

 

$oc login -u kubeadmin -p v24F5-arnmP-IxW4S-U9FP7 https://api.crc.testing:6443 

 

$oc debug nodes/crc-pkjt4-master-0

$oc login -u kubeadmin -p v24F5-arnmP-IxW4S-U9FP7 https://api.crc.testing:6443

$cd /host/mnt/pv-data

Enter the PV folders you noted db2 was using and delete the files (carefully):

$cd pv0021

$rm -Rf *

$cd pv0016

$rm -Rf *

        Clear the "Released" flag from those PVs by removing the Claim 

$oc patch pv pv0021 -p '{"spec":{"claimRef": null}}'

$oc patch pv pv0016 -p '{"spec":{"claimRef": null}}'

        Exit the debug pod: 

$exit

       

At this point DB2 is removed and you can go back to the "Create a db2-dev folder and project" section and redeploy DB2.

 

Installing the B2Bi 6.1 Certified Container

Create a B2Bi Project

The Docker image is called "b2bi", so we will name our project "b2bgateway" to make things clearer.

 

Change to your crc folder and create a project:

C:\>cd crc   

C:\crc\>mkdir b2bgateway

C:\crc\>cd b2bgateway   

C:\crc\b2bgateway\>oc new-project b2bgateway

 

Configure the B2Bi Pull Secret

Reference:  https://www.ibm.com/docs/en/b2b-integrator/6.1.0?topic=artifacts-downloading-certified-container-images-from-entitled-registry

 

  • Log in with your IBMid (apply here: https://myibm.ibm.com/dashboard/)
  • Scroll down to software and click on the Container Software Library
  • Select Get Entitlement Key, and copy the key
  • Create the file c:\crc\b2bgateway\setb2bsecret.bat with this content. Make sure you replace the values and include the blank line at the end:

 

REM Set the variables to the correct values

REM Use cp for the value of the docker-username field

REM

set ENTITLEDKEY="<Entitlement Key from MyIBM>"

set EMAIL="<email ID associated with your IBM ID>"

set NAMESPACE="b2bgateway"

 

oc create secret docker-registry ibm-registry ^

 --docker-server=cp.icr.io ^

 --docker-username=cp ^

 --docker-password=%ENTITLEDKEY% ^

 --docker-email=%EMAIL% ^

 --namespace=%NAMESPACE%

<this line needs to be blank, but exist in the file>

 

  • Run the command setb2bsecret.bat

 

Install Helm v3:

  • Open the CRC console with crc console and log in as kubeadmin
  • Click the ? icon in the top right and select Command Line Tools
  • Click Download Helm and download helm-windows-amd64.exe
  • Save it as c:\crc\helm.exe
  • Run

c:\crc\helm.exe --help

 

Download JDBC 4 driver for DB2 11.5

 

  • Open this page and download the JDBC 4 driver for DB2 11.5.5+. For me it was v11.5.5_jdbc_sqlj.tar.gz.
  • Use a tool like 7-Zip to extract multiple levels of archive to get to db2_db2driver_for_jdbc_sqlj.zip.
  • Extract the file db2jcc4.jar to c:\crc

 

Create the B2Bi Persistent Volumes

 

Create the local folders on the CRC master node:   

C:\crc\b2bgateway\>oc get nodes (notice the node name returned and swap below as needed)   

C:\crc\b2bgateway\>oc debug node/crc-nsk8x-master-0

$cd /host/mnt/pv-data   

$mkdir b2bi   

$mkdir b2bi/logs   

$mkdir b2bi/resources   

$mkdir b2bi/documents   

$chmod 777 b2bi/logs   

$chmod 777 b2bi/resources   

$chmod 777 b2bi/documents

 

Copy the DB2 driver jar from the remote system to the resources folder

 

On a Windows system, you will need to be creative. The simplest approach is to put the file on a remote FTP, web, or SSH server and use ftp, curl, or scp to pull it from within the debug pod that is currently open.

 

Note:  CRC has a file sync facility, but it requires cwRsync/cygwin to be present on the Windows machine.  The sync approach is described here:  https://docs.openshift.com/container-platform/4.1/nodes/containers/nodes-containers-copying-files.html

 

We chose to put the file on a remote web server and pull it to the CRC node with curl:

$chroot /host

$cd /mnt/pv-data/b2bi/resources

$curl http://192.168.1.2:8081/upload/db2jcc4.jar -o db2jcc4.jar

$exit (the chroot)

$exit (the debug pod)

 

Create B2Bi Configuration Files

 

Create the following yaml files in your b2bgateway project folder. The file name is given in parentheses before each block (indentation is important):

(documents-pv-local.yaml)

apiVersion: v1

kind: PersistentVolume

metadata:

  name: documents-pv-local

  labels:

    intent: documents

spec:

  storageClassName: manual

  capacity:

    storage: 1000Mi         

  accessModes:

    - ReadWriteMany

  persistentVolumeReclaimPolicy: Retain

  hostPath:

    path: /mnt/pv-data/b2bi/documents

 

(resources-pv-local.yaml)

 

apiVersion: v1

kind: PersistentVolume

metadata:

  name: resources-pv-local

  labels:

    intent: resources

spec:

  storageClassName: manual

  capacity:

    storage: 500Mi

  accessModes:

    - ReadOnlyMany

  persistentVolumeReclaimPolicy: Retain

  hostPath:

    path: /mnt/pv-data/b2bi/resources

 

(logs-pv-local.yaml)

 

apiVersion: v1

kind: PersistentVolume

metadata:

  name: logs-pv-local

  labels:

    intent: logs

spec:

  storageClassName: manual

  capacity:

    storage: 1000Mi

  accessModes:

    - ReadWriteMany

  persistentVolumeReclaimPolicy: Retain

  hostPath:

    path: /mnt/pv-data/b2bi/logs

 

Apply the yaml files you created:

C:\crc\b2bgateway\>oc create -f logs-pv-local.yaml

C:\crc\b2bgateway\>oc create -f resources-pv-local.yaml  

C:\crc\b2bgateway\>oc create -f documents-pv-local.yaml    

 

Create B2Bi Secrets  

 

Create the following yaml files in your b2bgateway project folder. The file name is given in parentheses before each block (indentation is important):

 

(b2b-passphrase.yaml)

 

apiVersion: v1

kind: Secret

metadata:

    name: b2b-system-passphrase-secret

type: Opaque

stringData:

    SYSTEM_PASSPHRASE: passw0rd

 

(b2b-dbsecret.yaml)

 

apiVersion: v1

kind: Secret

metadata:

    name: b2b-db-secret

type: Opaque

stringData:

    DB_USER: db2inst1

    DB_PASSWORD: passw0rd

#    DB_TRUSTSTORE_PASSWORD: password

#    DB_KEYSTORE_PASSWORD: password

 

(b2b-libertysecret.yaml)

 

apiVersion: v1

kind: Secret

metadata:

    name: b2b-liberty-secret

type: Opaque

stringData:

    LIBERTY_KEYSTORE_PASSWORD: passw0rd

 

Apply the secrets with:

C:\crc\b2bgateway\>oc create -f b2b-passphrase.yaml

C:\crc\b2bgateway\>oc create -f b2b-dbsecret.yaml

C:\crc\b2bgateway\>oc create -f b2b-libertysecret.yaml

 

Download the B2Bi Helm chart

 

Reference: https://www.ibm.com/docs/en/b2b-integrator/6.1.0?topic=dcca-downloading-certified-container-helm-charts-from-chart-repository

 

 

Configure Security Constraints   

 

The helm chart ships with Bash scripts to configure the security constraints.  Unless you have a Bash shell configured on your Windows system, you will need to run them from inside a debug pod.

 

C:\crc\b2bgateway\>oc debug nodes/crc-pkjt4-master-0

$oc login -u kubeadmin -p v24F5-arnmP-IxW4S-U9FP7 https://api.crc.testing:6443

$cd /tmp

$curl -L https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm/ibm-b2bi-prod-2.0.2.tgz -o ibm-b2bi-prod-2.0.2.tgz

$tar -xvzf ibm-b2bi-prod-2.0.2.tgz

 

Optional (more recent versions of CRC do not require this): copy the oc command to kubectl, since these scripts expect kubectl to be available.

 

Continue running the security scripts:

$cd /tmp/ibm-b2bi-prod/ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration

$chmod 755 createSecurityClusterPrereqs.sh

$./createSecurityClusterPrereqs.sh

$cd ../namespaceAdministration

$chmod 755 createSecurityNamespacePrereqs.sh

$./createSecurityNamespacePrereqs.sh b2bgateway

$exit (the debug pod)

 

Create B2Bi chart configuration overrides

 

This file will hold any non-default configurations to be applied by the Helm chart. The example below configures a single ASI pod and a single API pod for a minimal environment while initializing a new database schema. Refer here for documentation on the various values available in the Helm chart.

 

Pay special attention to the DB2 section and use the service values you pulled out previously. 

 

Create c:\crc\b2bgateway\values_override.yaml with this content:

global:

  image:

    repository: "cp.icr.io/cp/ibm-b2bi/b2bi"

  # repository: "image-registry.openshift-image-registry.svc:5000/b2bgateway/myb2bi"

  # Provide the tag value in double quotes

    tag: "6.1.0.2"

    #tag: "latest"

    pullPolicy: IfNotPresent

    pullSecret: "ibm-registry"

    #pullSecret: ""

 

appResourcesPVC:

  name: resources

  storageClassName: "manual"

  selector:

    label: "intent"

    value: "resources"

  accessMode: ReadOnlyMany

  size: 250Mi

 

 

appLogsPVC:

  name: logs

  storageClassName: "manual"

  selector:

    label: "intent"

    value: "logs"

  accessMode: ReadWriteMany

  size: 750Mi

 

appDocumentsPVC:

  enabled: false

  name: documents

  storageClassName: "manual"

  selector:

    label: "intent"

    value: "documents"

  accessMode: ReadWriteMany

  size: 750Mi

 

ingress:

  enabled: false

 

dataSetup:

  enabled: true 

  upgrade: false

 

#env:

#  upgradeCompatibilityVerified: true

 

logs:

  # true if user wish to redirect the application logs to console else false. If provided value is true , then application logs will reside inside containers. No volume mapping will be used.

  enableAppLogOnConsole: false

 

setupCfg:

  #Upgrade

  #upgrade: false

  basePort: 30000

  licenseAcceptEnableSfg: true

  licenseAcceptEnableEbics: false

  licenseAcceptEnableFinancialServices: false

  systemPassphraseSecret: b2b-system-passphrase-secret

  enableFipsMode: false

  nistComplianceMode: "off"

 

  # Provide the DB attributes

  dbVendor: DB2

  dbHost: 10.217.4.127

  dbPort: 50000

  dbData: B2BI

  dbDrivers: db2jcc4.jar

  dbCreateSchema: true

  oracleUseServiceName: false

  # Values can be either true or false

  usessl: false

  # Required when usessl is true

  dbTruststore:

  dbKeystore:

  # Name of DB secret

  dbSecret: b2b-db-secret

 

  #Provide the admin email address

  adminEmailAddress: halljb@us.ibm.com

  # Provide the SMTP host details 

  smtpHost: localhost

 

asi:

  replicaCount: 1

 

  frontendService:

    type: NodePort

  backendService:

    type: NodePort

    ports:

      - name: asi-ftp-1

        port: 30032

        targetPort: 30032

        nodePort: 32021

        protocol: TCP

      - name: asi-sftp-1

        port: 30039

        targetPort: 30039

        nodePort: 32022

        protocol: TCP

 

    portRanges:

      - name: adapters

        portRange: 30350-30400

        targetPortRange: 30350-30400

        nodePortRange: 30350-30400

        protocol: TCP

 

  resources:

    limits:

      cpu: 4000m

      memory: 8Gi

    requests:

      cpu: 1000m

      memory: 4Gi

 

  autoscaling:

    enabled: false

    minReplicas: 1

    maxReplicas: 2

    targetCPUUtilizationPercentage: 60

 

ac:

  replicaCount: 0

 

api:

  replicaCount: 1

  frontendService:

    type: NodePort

  ports:

    http:

      name: http

      port: 35005

      targetPort: http

      nodePort: 30005

      protocol: TCP

    https:

      name: https

      port: 35006

      targetPort: https

      nodePort: 30006

      protocol: TCP

  limits:

    cpu: 4000m

    memory: 4Gi

  requests:

    cpu: 1000m

    memory: 2Gi

 

 

dashboard:

  enabled: true

 

purge:

  enabled: false

 

These values will give you an ASI pod, an API pod, no AC pods, and a disabled autoscaler definition for ASI that, if enabled, scales to 2 replicas at 60% CPU. The autoscaler requires the monitoring operators to be enabled, so consider it if you have the resources.

 

Deploy B2Bi/SFG with the Helm chart

 

Now that our Helm override values are in place, we can run the Helm chart which will deploy the containers to the cluster.

 

Note: If you had a previous failed install attempt, the persistent volumes may show a "Released" state. You will need to remove and recreate those volumes before running the chart again. Delete the persistent volumes via the OCP web console (Storage section) and recreate them from the yaml files as you did originally. This usually needs to be done for logs and documents, but not resources. If dataSetup is enabled, give it 2 hours (database setup alone took 1:24 on my system with DB2 inside the cluster).

 

Perform the Helm install from the b2bgateway folder (pick a release name and substitute):

C:\crc\b2bgateway\>helm install <my release name> -f .\values_override.yaml .\ibm-b2bi-prod --timeout 120m0s --namespace b2bgateway

 

If the deployment fails for any reason, delete it with:   

C:\crc\b2bgateway\>helm delete <my release name>

 

You will also need to re-create the persistent volumes (see the note above) before subsequent install attempts.

 

Get the pods listing:

C:\crc\b2bgateway\>oc get pods -l release=<my release name> -n b2bgateway -o wide

 

To view the logs for a pod, run the below command:

C:\crc\b2bgateway\>oc logs <pod name> -n b2bgateway

 

If performing DB operations (a fresh install or an upgrade), it will probably take 90 minutes to complete. Once DB config is done, Helm will display some information on how to access B2Bi.

 

export NODE_IP=$(kubectl get nodes --namespace b2bgateway -o jsonpath="{.items[0].status.addresses[0].address}")

  export NODE_PORT=$(kubectl get --namespace b2bgateway -o jsonpath="{.spec.ports[1].nodePort}" services jbhb2bi-61-ibm-b2bi-prod)

  echo "dashboard: https://$NODE_IP:$NODE_PORT/dashboard"

  echo "filegateway : https://$NODE_IP:$NODE_PORT/filegateway"
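With the values used in this article, that NOTES computation reduces to the following (the IP is what 'crc ip' returned for us and the port is the ASI base port from values_override.yaml; yours may differ):

```shell
NODE_IP=192.168.130.11   # from 'crc ip' (illustrative; yours will differ)
NODE_PORT=30000          # ASI base port configured in values_override.yaml
echo "dashboard: https://$NODE_IP:$NODE_PORT/dashboard"
echo "filegateway : https://$NODE_IP:$NODE_PORT/filegateway"
```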

 

Access the B2Bi Administration interface

 

Get the external IP address of the CRC VM:

C:\crc\b2bgateway\>crc ip (for me this was 192.168.130.11)

 

Access b2b console with:

http://<IP>:30000/dashboard/Login

 

Liberty server at:

http://<IP>:30005/   

http://<IP>:30005/B2BAPIs/svc/   

 

Liberty server ssl at:

https://<IP>:30006/

 

If the database create/upgrade is disabled (you have a current DB already), the API pod will start within approximately 3 minutes, and the ASI pod within approximately 6 minutes.

 

Making Configuration Updates

If you make port changes, etc., to values_override.yaml, you can apply the update with:

helm upgrade -f values_override.yaml <my release name> .\ibm-b2bi-prod

 

If the deployment fails, you can use these commands to remove it (swap in your release name):

C:\crc\b2bgateway\>helm delete jbhb2bi-61

C:\crc\b2bgateway\>oc delete pvc jbhb2bi-61-b2bi-resources-pvc

C:\crc\b2bgateway\>oc delete pvc jbhb2bi-61-b2bi-logs-pvc

C:\crc\b2bgateway\>oc delete configmap jbhb2bi-61-b2bi-config

 

Summary

 

In this blog we walked through deploying a barebones Sterling B2Bi Certified Containers development environment with CRC on a Windows system. This should serve as an excellent sandbox for building skills on Sterling B2Bi containers, and on OpenShift itself. In addition, these skills are directly applicable to deploying Sterling B2Bi on a full OpenShift cluster either on premise or on any cloud platform.

 

In the next installment of the series we will cover deploying Sterling B2Bi Certified Containers with CodeReady Containers on a MacOS environment.

 

Reference of CRC commands

crc setup

crc stop

crc start

crc status

crc console  (opens the web browser interface)

crc ip (displays the IP address of the cluster vm)

 

Appendix

 

 

  1. Minimum Requirements for IBM Sterling B2Bi

https://www.ibm.com/support/pages/detailed-hardware-and-software-requirements-sterling-b2b-integrator-v526-or-later-and-global-mailbox-and-sterling-file-gateway-v226-or-later

 

  2. CodeReady Containers requires the following system resources:
  • 4 virtual CPUs (vCPUs)
  • 9 GB of free memory
  • 35 GB of storage space

 

https://access.redhat.com/documentation/en-us/red_hat_codeready_containers/1.0/html/getting_started_guide/getting-started-with-codeready-containers_gsg

 

 

 

 

  • Other resources:

 

 

https://dzone.com/articles/an-introduction-to-red-hat-openshift-codeready-con

 

https://www.dataversity.net/openshift-vs-kubernetes-the-seven-most-critical-differences/


#Featured-area-2


#Featured-area-2-home




#IBMSterlingB2BIntegratorandIBMSterlingFileGatewayDevelopers
#DataExchange
#Highlights-home
#Highlights

Comments

Mon August 29, 2022 02:26 PM

Hi Pierre!  I think you will need to add a route to the GUI as Rob had to do in the previous comment. Look at your project for the service names; the command should be something similar to: oc expose svc b2bi6104-b2bi-asi-frontend-svc (output: route.route.openshift.io/b2bi6104-b2bi-asi-frontend-svc exposed).

Fri August 19, 2022 05:01 AM

Hi Brian,

Thank you very much for this guide, I was able to successfully deploy the pods, and everything looks fine. The problem I am facing is that I cannot access the GUI of either SFG or B2Bi. I am not a CRC guru, so I am lost as to why I cannot get to the GUI following your guide. Could you kindly assist?

Regards
Pierre

Tue March 29, 2022 05:06 AM

My own experience has been tough. Having tried the release mentioned here, I was unable to resolve the DNS warning even after adding a nameserver, etc. However, the main issue was being unable to access either the dashboard or filegateway URLs. My own IP, having issued the crc ip command, was always coming back as 127.0.0.1, and any attempt to access 127.0.0.1:30000/dashboard or filegateway returned unable to resolve host. Finally, I took the decision to add a route to OpenShift (the helm chart doesn't appear to provision one of these, but I'm looking into that). I exposed my b2bgateway cluster service with the oc svc export command to create a route, which then provided access!

Wed March 23, 2022 03:31 PM

A couple of changes since publishing you will need to incorporate:
-In the "Install the DB2 operator" section make sure you pick the latest version of the stream (v1.1 at the moment). Also, you will need to update your db2config.yaml with a license section like this:
...
spec:
   license:
      accept: true
   account:
...

-  In the "Download the B2Bi Helm chart" section, some redirects have been added that confuse curl.  You need to either add the -L option to handle redirects, or use the new url: https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm/ibm-b2bi-prod-2.0.2.tgz