Managed File Transfer


Run B2Bi and OpenShift on your Laptop Part 1 – Linux

By Brian Hall posted Fri September 03, 2021 04:21 PM

  

Collaborators: Carlos Chivardi, Adolfo Loustaunau

 

Containerization and hybrid cloud represent the future of enterprise workloads. Many businesses have already begun their journey to cloud, and those that haven’t are actively considering their approach. Red Hat OpenShift Container Platform provides a world-class operating environment for containerized software by handling the orchestration, management and monitoring of containerized workloads across distributed clusters of nodes. 

 

IBM Certified Containers running on OpenShift provide enormous benefits in your IBM Sterling B2B Integrator/File Gateway environment, such as easy rolling upgrades and dynamic scaling. Customers running the traditional install of Sterling B2Bi/SFG should be familiarizing themselves with containerization and gaining skills in that area. To that end, running a Sterling B2Bi/SFG environment in a development cluster is a great first step. However, access to a full OpenShift cluster for development may be a barrier to entry.

 

Red Hat CodeReady Containers provides an easy path to a development system by combining the OpenShift master and worker nodes into a single image and virtualizing that image with traditional hypervisors on Windows, MacOS and Linux. This allows one sufficiently powerful machine to operate as an OpenShift cluster for development purposes, making the platform far more accessible to those without existing OpenShift infrastructure.

 

In this three-part blog series, we will cover deploying a development system of Sterling B2Bi/SFG on OpenShift via Red Hat CodeReady Containers running on a single machine. Part 1 will cover deployment on a Linux host system. Part 2 will cover deployment on a Windows host system. Lastly, part 3 will cover deployment on a MacOS machine.

 

Note:  Sterling B2Bi/SFG certified container images are downloadable from the IBM Entitled Registry using your entitlement key, via your Passport Advantage account, or via IBM Fix Central, provided you have the proper entitlement. If your organization does not have entitlements for Sterling B2Bi/SFG certified containers (they are separate from the traditional Sterling B2Bi/SFG licenses), contact your IBM sales representative to inquire about a trial license to deploy the containers before continuing.

 

The first step is to install CodeReady Containers (CRC) and its prerequisites. For the purposes of this blog, the target machine is a 4-core (8-thread) ThinkPad A485 with 32GB of RAM running RHEL 8.2. The machine needs to run CRC as well as a minimal Sterling B2Bi/SFG, so it should be as robust as you can find. If necessary, you can squeeze into a machine with 16GB if you don't enable the OpenShift monitoring operators at the end.
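A quick pre-flight check can tell you whether a candidate machine clears the bar. This is only a sketch; the thresholds mirror the sizing discussed above (4 cores and 16GB as the bare minimum, 32GB comfortable):

```shell
#!/bin/sh
# Pre-flight sizing check for a CRC + Sterling B2Bi/SFG development host.
min_cpus=4
min_mem_kb=$((16 * 1024 * 1024))   # 16GB expressed in kB

cpus=$(nproc)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

[ "$cpus" -ge "$min_cpus" ]     || echo "WARN: only $cpus CPUs (want >= $min_cpus)"
[ "$mem_kb" -ge "$min_mem_kb" ] || echo "WARN: only $((mem_kb / 1024)) MB RAM (want >= 16384)"
echo "detected: cpus=$cpus mem_mb=$((mem_kb / 1024))"
```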

 

The first prerequisite is Docker:

 

Install Docker

 

As root (or with sudo):

$yum install docker device-mapper-libs device-mapper-event-libs
$systemctl start docker.service

$systemctl enable docker.service

To let your normal user run Docker commands:

$groupadd dockerroot
$usermod -aG dockerroot <userid>

 

Edit (or create) /etc/docker/daemon.json with contents:

{"live-restore": true,"group": "dockerroot"}
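A malformed daemon.json will keep Docker from restarting, so it can be worth validating the file first. A small sketch using Python's stdlib json.tool, demonstrated against a copy in /tmp (on the real system, point it at /etc/docker/daemon.json):

```shell
# json.tool exits non-zero on a JSON syntax error, making it a cheap linter.
printf '%s\n' '{"live-restore": true, "group": "dockerroot"}' > /tmp/daemon.json

if python3 -m json.tool /tmp/daemon.json > /dev/null; then
  echo "daemon.json OK"
else
  echo "daemon.json has a JSON syntax error" >&2
fi
```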

 

$systemctl restart docker

 

Now log out and back in, and your regular user can run Docker commands. We're now ready to install CodeReady Containers.

 

 

Download and Setup CRC for Linux

 

 

  • Download the pull secret by clicking the button on the CRC download page.

 

  • Note: Version 1.13 was used for this document, and I still recommend it even though more recent versions are available; those versions have more features (and more overhead). Resources are already tight with Sterling B2Bi/SFG, so treat 1.13 as the baseline, but feel free to experiment with later versions. I tested 1.21 and 1.25 as well; both worked, with somewhat lower performance.

 

  • Extract the file with tar -xf and move it somewhere with at least 35GB of space. In my case, I used /home/halljb/crc-linux-1.13.0-amd64.

 

  • Rename the folder to crc:

$mv /home/halljb/crc-linux-1.13.0-amd64 /home/halljb/crc

 

  • Add the folder to your path:

$export PATH=$PATH:/home/halljb/crc

 

  • Install NetworkManager

$su -c 'yum install NetworkManager'

 

Fix possible DNS issues on RHEL 8

 

CRC uses dnsmasq to handle its custom naming configuration. This worked fine on RHEL 7.7, but I encountered issues on RHEL 8.2. This may be due to the internal IBM image of RHEL, so check if it applies to your system. If pdnsd is active it will conflict with dnsmasq, making CRC non-functional. 

 

To correct this issue, run these commands to disable pdnsd:

$sudo systemctl stop pdnsd.service
$sudo systemctl disable pdnsd.service

 

You can read more about the CRC DNS setup here.

  

  • Run the CRC setup:

$crc setup

 

Storage will go in the ~/.crc folder, but you can use a symbolic link to another drive if you need the space.

 

  • Copy the pull secret you downloaded to the crc folder and start CRC:

$crc start -p /home/halljb/crc/pull-secret

 

  • Once CRC starts, set up the shell environment to work with CRC:

$eval $(crc oc-env)

 

Note the credentials and commands displayed at the end of startup. You will use them later:

 

To access the cluster, first set up your environment by following 'crc oc-env' instructions

INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'

INFO To login as an admin, run 'oc login -u kubeadmin -p hIXpp-RtRjL-QYi8W-nREf2 https://api.crc.testing:6443' (swap the password shown with the access key displayed in your startup message).

 

Create a DB2 Database for Sterling B2Bi

 

You can point to any existing DB2 server you have available (or Oracle/MSSQL), but I have found the most convenient approach is the free DB2 Community Edition Docker container. It has more than enough capacity for Sterling B2Bi, with the only limitation being that it supports one database at a time. Pointing the container at a different folder makes it convenient to swap between DB states; I will cover this in more detail later.

 

DB2 docker image deployment

 

Documentation on DB2 Community Edition may be found here.

  • Download the DB2 container:

$docker pull ibmcom/db2

  • As root, create a location for DB storage:

$mkdir /data/B2BIDB

  • Run DB2 for the B2BI database:

$docker run -itd --name mydb2 --privileged=true -p 50000:50000 -e LICENSE=accept -e DB2INST1_PASSWORD=passw0rd -e DBNAME=B2BI -v /data/B2BIDB:/database ibmcom/db2

  • Make note of the DB2 password (DB2INST1_PASSWORD), database name (DBNAME), and database storage location (the -v volume mapping) in the docker run command line. Swap in the values you want, but note the DB2 user is hardwired as db2inst1.

Subsequent runs (after a reboot etc) can be done with normal Docker commands such as:   

$docker start mydb2

  • If you need to run DB2 commands you can login to the instance with:

$docker exec -ti mydb2 bash -c "su - db2inst1"   
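The DB2 container takes a few minutes to initialize on first start, so a small poll helper can save you from connecting too early. This is a hedged sketch: the generic wait_for function is mine, and the exact readiness message is an assumption; check `docker logs mydb2` for the precise wording your image version prints:

```shell
# Generic poll helper: wait until a command's output contains a pattern,
# or give up after a timeout. Returns 0 on match, 1 on timeout.
wait_for() {  # usage: wait_for "<command>" "<pattern>" <timeout-seconds>
  elapsed=0
  until eval "$1" 2>/dev/null | grep -q "$2"; do
    sleep 5
    elapsed=$((elapsed + 5))
    [ "$elapsed" -ge "$3" ] && return 1
  done
  return 0
}

# Real usage would be something like (message text is an assumption):
#   wait_for "docker logs mydb2" "Setup has completed" 600
wait_for "echo ready" "ready" 10 && echo "pattern found"
```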

 

Download JDBC 4 driver for DB2 11.5

 

Open this page and download the JDBC 4 driver for DB2 11.5. Extract the driver to the location you want:

$tar -xvzf v11.5.4_jdbc_sqlj.tar.gz

$cd jdbc_sqlj/

$unzip db2_db2driver_for_jdbc_sqlj.zip

 

Deploy a local Docker registry if you don't have one

$docker run -d -p 5000:5000 --restart=always --name registry registry:2

 

Additional reference documentation is available here 

 

Configure CRC Resources for the Sterling B2Bi Container Image

 

CRC defaults to 4 vCPUs and 8GB of RAM. 16GB is (barely) enough to run a Sterling B2Bi API pod and a couple of ASI pods. This environment is tight, so I did not run any AC pods, the purge pod, etc. You can change the CRC configuration with:

$crc config set cpus 8

$crc config set memory 16384

 

If you plan to enable monitoring later, and you have enough RAM to support it, use:

$crc config set memory 24576

 

If you make configuration changes, you must delete the current container with "crc delete". The new values take effect once the container is re-created with "crc start". Later versions of CRC only require a stop/start rather than a delete/start.

 

Start CRC with your pull secret you downloaded:

$crc start -p ../pull-secret

 

Log in to OpenShift as kubeadmin (swap in your credentials from the crc start output):

$oc login -u kubeadmin -p hIXpp-RtRjL-QYi8W-nREf2 https://api.crc.testing:6443

 

Install Helm v3   

Open the CRC console with:

$crc console

 

Login as kubeadmin. Click the ? icon in the top right and select Command Line Tools. Click "Download Helm" and pull helm-linux-amd64. Install Helm with: 

$chmod 755 helm-linux-amd64   

$sudo mv helm-linux-amd64 /usr/local/bin/helm   

$helm --help

 

Create a Sterling B2Bi project

 

The Docker image is called "b2bi", so we will name the project "b2bgateway" to keep the two distinct.

 

Change to your crc folder and create a project:

$cd /home/halljb/crc   

$mkdir b2bgateway

$cd b2bgateway   

$oc new-project b2bgateway

 

Load the Sterling B2Bi/SFG image

 

To follow the steps in this blog you can obtain the Sterling B2Bi/SFG certified container image through your Passport Advantage account or via Fix Central. If your organization does not have entitlements for Sterling B2Bi/SFG certified containers (they are separate from the traditional Sterling B2Bi/SFG licenses), contact your IBM sales representative to inquire about a trial license to deploy the containers. Documentation on the ways to download the containers is available here.

 

Extract the 6.1 downloaded image into your b2bgateway folder:   

$tar -xvf STER_B2B_INT_CERT_CONT_V6.1_ML.tar b2bi-6.1.0.0.tar

 

Extract the 6.1 Helm charts into your b2bgateway folder:   

$tar -xvf STER_B2B_INT_CERT_CONT_V6.1_ML.tar ibm-b2bi-prod*

 

Extract the Helm charts:   

$tar -xvzf ibm-b2bi-prod-2.0.0.tgz

 

Load Sterling B2Bi Docker image into the local Docker repo:

$docker load -i b2bi-6.1.0.0.tar

 

Afterwards, "docker images" will show an image "b2bi" with tag 6.1.0.0.

Tag the image in the local Docker repository and push it to the local registry (swap in your IP as needed):

$docker tag b2bi:6.1.0.0 192.168.1.104:5000/myb2bi

$docker push 192.168.1.104:5000/myb2bi

Note: docker push has no --tls-verify flag (that option belongs to podman). For a plain-HTTP local registry, add "insecure-registries": ["192.168.1.104:5000"] to /etc/docker/daemon.json and restart Docker.

 

The image will now be myb2bi:latest in the local registry.

 

Login to the crc master node with:   

$oc get nodes

 

Notice the node name returned and swap below as needed:   

$oc debug node/crc-nsk8x-master-0
$chroot /host

 

Pull the Sterling B2Bi Docker image from the local docker repository into CRC (swap the IP address of your workstation as needed):   

$podman pull 192.168.1.104:5000/myb2bi:latest --tls-verify=False   

 

The image in the container will now be known as 192.168.1.104:5000/myb2bi:latest, but CRC will want it to be known by its internal name:

$podman tag 192.168.1.104:5000/myb2bi image-registry.openshift-image-registry.svc:5000/b2bgateway/myb2bi

 

Create Persistent volumes

 

Create the local folders on the CRC master node:   

$oc get nodes (notice the node name returned and swap below as needed)   
$oc debug node/crc-nsk8x-master-0
$cd /host/mnt/pv-data   
$mkdir b2bi   
$mkdir b2bi/logs   
$mkdir b2bi/resources   
$mkdir b2bi/documents   
$chmod 777 b2bi/logs   
$chmod 777 b2bi/resources   
$chmod 777 b2bi/documents

Copy the DB2 driver jar from the remote system to the resources folder (swapping your IP, user and where you left the JDBC files):

$chroot /host   
$cd /mnt/pv-data/b2bi/resources   
$scp halljb@192.168.1.104:/data/nfsdata/b2bi/resources/db2jcc4.jar .   
$exit    (the chroot)
$exit    (the master node)

Create the following yaml files in your b2bgateway project folder. The file name is shown in parentheses, followed by the contents (indentation is important):

(documents-pv-local.yaml)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: documents-pv-local
  labels:
    intent: documents
spec:
  storageClassName: manual
  capacity:
    storage: 1000Mi         
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/pv-data/b2bi/documents

(resources-pv-local.yaml)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: resources-pv-local
  labels:
    intent: resources
spec:
  storageClassName: manual
  capacity:
    storage: 500Mi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/pv-data/b2bi/resources

(logs-pv-local.yaml)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: logs-pv-local
  labels:
    intent: logs
spec:
  storageClassName: manual
  capacity:
    storage: 1000Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/pv-data/b2bi/logs

Apply the yaml files you created:

$oc create -f logs-pv-local.yaml
$oc create -f resources-pv-local.yaml  
$oc create -f documents-pv-local.yaml    
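Before moving on, it is worth confirming all three volumes report "Available". A small hedged helper that parses `oc get pv` tabular output (STATUS is the fifth column when each access mode prints as a single token like RWX), demonstrated here against captured sample output standing in for the real oc call:

```shell
# Print the STATUS column for a named PersistentVolume from 'oc get pv' output.
pv_phase() {  # usage: oc get pv --no-headers | pv_phase <pv-name>
  awk -v name="$1" '$1 == name { print $5 }'
}

# Sample output (stand-in for: oc get pv --no-headers)
sample='logs-pv-local        1000Mi   RWX   Retain   Available
documents-pv-local   1000Mi   RWX   Retain   Available
resources-pv-local   500Mi    ROX   Retain   Available'

printf '%s\n' "$sample" | pv_phase logs-pv-local
```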

 

Create B2Bi Secrets  

Create the following yaml files in your b2bgateway project folder. The file name is shown in parentheses, followed by the contents (indentation is important):

(b2b-passphrase.yaml)

apiVersion: v1
kind: Secret
metadata:
  name: b2b-system-passphrase-secret
type: Opaque
stringData:
  SYSTEM_PASSPHRASE: passw0rd

 
(b2b-dbsecret.yaml)

apiVersion: v1
kind: Secret

metadata:
    name: b2b-db-secret
type: Opaque
stringData:
    DB_USER: db2inst1
    DB_PASSWORD: passw0rd
#    DB_TRUSTSTORE_PASSWORD: password
#    DB_KEYSTORE_PASSWORD: password


(b2b-libertysecret.yaml)

apiVersion: v1
kind: Secret

metadata:
    name: b2b-liberty-secret
type: Opaque
stringData:
#    LIBERTY_KEYSTORE_PASSWORD: password


Apply the secrets with:

$oc create -f b2b-passphrase.yaml
$oc create -f b2b-dbsecret.yaml
$oc create -f b2b-libertysecret.yaml

 

Configure Security Constraints   

 

Copy the "oc" command to "kubectl" since these scripts expect that to be available (oc is a superset of kubectl):

$cp ~/.crc/bin/oc/oc ~/.crc/bin/oc/kubectl   
$cd <b2bgateway project>/ibm-b2bi-prod/ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration
$chmod 755 createSecurityClusterPrereqs.sh   
$./createSecurityClusterPrereqs.sh

 

Note:  If you get a bad interpreter error that mentions "^M", you may need to convert the scripts to Unix newlines.  Use this command to fix them if necessary:

$find ~/crc/b2bgateway/ibm-b2bi-prod -name "*.sh" -print0 | xargs -0 dos2unix


$cd ../namespaceAdministration   
$chmod 755 createSecurityNamespacePrereqs.sh
$./createSecurityNamespacePrereqs.sh b2bgateway

 

Create values_override.yaml 

 

This file will hold any non-default configurations to be applied by the Helm chart. The example specified below will configure a single ASI pod and a single API pod for a minimal environment while creating a new DB table. Refer here for documentation on the various values available in the Helm chart.

 Swap in your workstation IP in the DB2 section (or other database details if you chose a different path) and any other settings that may differ for you.

 
$cd <b2bgateway project dir>

 
(values_override.yaml)
global:
  image:
    repository: "image-registry.openshift-image-registry.svc:5000/b2bgateway/myb2bi"
  # Provide the tag value in double quotes
    tag: "latest"
    pullPolicy: IfNotPresent
    pullSecret: ""

appResourcesPVC:
  name: resources
  storageClassName:
  selector:
    label: "intent"
    value: "resources"
  accessMode: ReadOnlyMany
  size: 250Mi

appLogsPVC:
  name: logs
  storageClassName:
  selector:
    label: "intent"
    value: "logs"
  accessMode: ReadWriteMany
  size: 750Mi

 

appDocumentsPVC:
  enabled: false
  name: documents
  storageClassName:
  selector:
    label: "intent"
    value: "documents"
  accessMode: ReadWriteMany
  size: 750Mi

 

ingress:
  enabled: false



dataSetup:
  enabled: true 
  upgrade: false

 

#env:
#  upgradeCompatibilityVerified: true

logs:
  # Set to true to redirect application logs to the console instead of files.
  # If true, application logs stay inside the containers and no volume
  # mapping is used.
  enableAppLogOnConsole: false


setupCfg:
  #Upgrade
  #upgrade: false
  basePort: 30000
  licenseAcceptEnableSfg: true
  licenseAcceptEnableEbics: false
  licenseAcceptEnableFinancialServices: false
  systemPassphraseSecret: b2b-system-passphrase-secret
  enableFipsMode: false
  nistComplianceMode: "off"

# Provide the DB attributes
  dbVendor: DB2
  dbHost: 192.168.1.104
  dbPort: 50000
  dbData: B2BI
  dbDrivers: db2jcc4.jar
  dbCreateSchema: true
  oracleUseServiceName: false
  # Values can be either true or false
  usessl: false
  # Required when usessl is true
  dbTruststore:
  dbKeystore:
  # Name of DB secret
  dbSecret: b2b-db-secret
  #Provide the admin email address
  adminEmailAddress: halljb@us.ibm.com
  # Provide the SMTP host details 
  smtpHost: localhost

asi:
  replicaCount: 1
  frontendService:
    type: NodePort
  backendService:
    type: NodePort
    ports:
      - name: asi-ftp-1
        port: 30032
        targetPort: 30032
        nodePort: 32021
        protocol: TCP
      - name: asi-sftp-1
        port: 30039
        targetPort: 30039
        nodePort: 32022
        protocol: TCP
    portRanges:
      - name: adapters
        portRange: 30350-30400
        targetPortRange: 30350-30400
        nodePortRange: 30350-30400
        protocol: TCP
  resources:
    limits:
      cpu: 4000m
      memory: 8Gi
    requests:
      cpu: 1000m
      memory: 4Gi
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 2
    targetCPUUtilizationPercentage: 90
ac:
  replicaCount: 0
api:
  replicaCount: 1
  frontendService:
    type: NodePort
  ports:
    http:
      name: http
      port: 35005
      targetPort: http
      nodePort: 30005
      protocol: TCP
    https:
      name: https
      port: 35006
      targetPort: https
      nodePort: 30006
      protocol: TCP
  resources:
    limits:
      cpu: 4000m
      memory: 4Gi
    requests:
      cpu: 1000m
      memory: 2Gi
dashboard:
  enabled: true
purge:
  enabled: false


These values give you one ASI pod, one API pod, no AC pods, and a disabled autoscaler for ASI that, if enabled, would scale to 2 replicas at 90% CPU. The autoscaler requires the monitoring operators enabled below, so consider it if you have the resources.

 

Deploy B2Bi/SFG with the Helm chart

 

Now that our Helm override values are in place, we can run the Helm chart which will deploy the containers to the cluster.

 

Note:  If you had a previous failed install attempt, the persistent volumes may show a "Released" state. You will need to remove and recreate those volumes before running the chart again. Delete them via the OCP web console (Storage section) and recreate them from the yaml files as you did originally. This usually applies to logs and documents, but not resources.

 

Perform the Helm install from the b2bgateway folder:

$helm install <my release name> -f ./values_override.yaml ./ibm-b2bi-prod --timeout 90m0s --namespace b2bgateway

 

If the deployment fails for any reason, delete it with: 

$helm delete <my release name>

 You will also need to re-create two storage volumes (see the note above) before subsequent install attempts.

 

Get the pods listing:

$oc get pods -l release=<my release name> -n b2bgateway -o wide

 

To view the logs for a pod, run the below command:

$oc logs <pod name> -n b2bgateway

If performing DB operations (a fresh install or an upgrade), deployment will probably take 45-60 minutes. Once the DB configuration is done, Helm will display information on how to access B2Bi.

 
Get the external IP address of the CRC VM:

$crc ip (for me this was 192.168.130.11)

Access b2b console with:

http://192.168.130.11:30000/dashboard/Login

 

Liberty server at:

http://192.168.130.11:30005/ 
http://192.168.130.11:30005/B2BAPIs/svc/

Liberty server ssl at:

https://192.168.130.11:30006/

 

If the database create/upgrade is disabled (you already have a current DB), the API pod will start within approximately 3 minutes, and the ASI pod within approximately 6 minutes.

 

Enabling Monitoring Operators in CRC

 

As CRC is designed to run in confined spaces, some of the OpenShift monitoring operators are disabled by default to improve performance. Enabling them will allow things like autoscale to work, and will provide CPU graphs etc., but it is very resource intensive. With monitoring enabled, my CRC environment consumes over 50% of all CPUs with no workloads running.

 

Some useful references on enabling monitoring operators in CRC:

Reference 1  

Reference 2 

 

To enable monitoring, first list the operator overrides with this command:

$oc get clusterversion version -ojsonpath='{range .spec.overrides[*]}{.name}{"\n"}{end}' | nl -v 0

 

Note the number next to "cluster-monitoring-operator" in the output (it was 0 for me). Run this command, replacing the monitoring operator index from the last step: 

 
$oc patch clusterversion/version --type="json" -p "[{'op':'remove', 'path':'/spec/overrides/<unmanaged-operator-index>'}]" -oyaml   

For example, my patched command was: 

$oc patch clusterversion/version --type="json" -p "[{'op':'remove', 'path':'/spec/overrides/0'}]" -oyaml
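If you prefer not to read the index off by eye, the two steps can be combined. A sketch: list_overrides here is a stub standing in for the oc jsonpath command above, so the logic can be shown without a live cluster:

```shell
# Stub for: oc get clusterversion version \
#   -ojsonpath='{range .spec.overrides[*]}{.name}{"\n"}{end}'
list_overrides() {
  printf '%s\n' cluster-monitoring-operator cluster-samples-operator
}

# nl -v 0 numbers lines from zero, matching the index oc patch expects.
idx=$(list_overrides | nl -v 0 | awk '/cluster-monitoring-operator/ { print $1; exit }')
echo "monitoring override index: $idx"
# ...then substitute $idx into the patch path: /spec/overrides/$idx
```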

Archiving and Restoring your B2Bi DB with the DB2 container

 

Once your Sterling B2Bi has deployed, you will probably do some customization (for example, creating an SFTP adapter to use port 30039, which was specified in the yaml). Perhaps you would like to archive that state for later use as a baseline environment, or perhaps you want to exercise upgrading from a previous version of the Sterling B2Bi DB. Being able to swap out the database state quickly and easily can be handy in these situations, especially as the container deployment takes so little time outside of DB activities.

 

I have found the simplest way to do this is to leverage the DB2 docker container pointing to a specific folder on the filesystem. You can stop Sterling B2Bi and DB2, then make a tarball of that folder for safe keeping. You can then extract that tarball to another folder and use that as your target DB without changing anything else. This is a very convenient way to quickly restore a particular state of Sterling B2Bi.

 

The following is the process to take a deployed Sterling B2Bi DB and clone it to another folder for a different deployment.

 

Create the archive

 

Delete Sterling B2Bi with: 

$helm delete <my release>

or

Stop Sterling B2Bi with: 

$crc stop

 

Stop and remove the DB2 container:

$docker stop mydb2
$docker rm mydb2

 
$cd B2BIDB

Note:  sudo is key here; the file owners and groups must be retained. Alternatively, do this as root.

 
$sudo tar -cvzf ../b2bidb.tgz *

Extract the archive to a new folder:

 
$mkdir B2BIDB2
$cd B2BIDB2
$sudo tar -xvzf ../b2bidb.tgz
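The archive-and-restore round trip above can be rehearsed on scratch data first, to confirm the tarball faithfully reproduces the tree. A self-contained sketch (the directory names are placeholders; the real run targets /data/B2BIDB as root so DB2 file ownership survives):

```shell
# Rehearse archive/restore on throwaway data.
src=$(mktemp -d)
dst=$(mktemp -d)

mkdir -p "$src/db2inst1/NODE0000"
echo "demo" > "$src/db2inst1/NODE0000/datafile"   # stand-in for a DB2 data file

tar -C "$src" -czf /tmp/b2bidb-demo.tgz .         # create the archive
tar -C "$dst" -xzf /tmp/b2bidb-demo.tgz           # restore to a new folder

diff -r "$src" "$dst" && echo "archive round trip OK"
```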

 

Create the DB2 container pointing at the new folder (B2BIDB2):

$docker run -itd --name mydb2 --privileged=true -p 50000:50000 -e LICENSE=accept -e DB2INST1_PASSWORD=passw0rd -e DBNAME=B2BI -v /data/B2BIDB2:/database ibmcom/db2

This allows you to substitute a different destination folder and content while retaining the DB name of "B2BI" that the deployment expects. 

 

If your DB target already contains the correct version of the Sterling B2Bi tables, make sure dataSetup.enabled is set to false in values_override.yaml. If your Sterling B2Bi DB is backlevel, set dataSetup.enabled to true and dataSetup.upgrade to true.
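Expressed as values_override.yaml fragments (using the same dataSetup keys shown earlier in this blog), the two cases are:

```yaml
# DB already contains current-version Sterling B2Bi tables: skip DB setup
dataSetup:
  enabled: false
```

```yaml
# DB is backlevel: run the DB scripts in upgrade mode
dataSetup:
  enabled: true
  upgrade: true
```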

 

Summary

 

In this blog we walked through deploying a barebones Sterling B2Bi Certified Containers development environment with CRC on a Linux system. This should serve as an excellent sandbox for building skills on Sterling B2Bi containers, and on OpenShift itself. In addition, these skills are directly applicable to deploying Sterling B2Bi on a full OpenShift cluster either on premise or on any cloud platform.

 

In the next installment of the series we will cover deploying Sterling B2Bi Certified Containers with CodeReady Containers on a Windows environment.

 

Reference of CRC commands

crc setup

crc stop

crc start

crc status

crc console  (opens the web browser interface)

crc ip (displays the IP address of the cluster vm)

 

Appendix

 
1. Minimum Requirements for IBM Sterling B2Bi


https://www.ibm.com/support/pages/detailed-hardware-and-software-requirements-sterling-b2b-integrator-v526-or-later-and-global-mailbox-and-sterling-file-gateway-v226-or-later

2. CodeReady Containers requires the following system resources:

  • 4 virtual CPUs (vCPUs)
  • 9 GB of free memory
  • 35 GB of storage space

https://access.redhat.com/documentation/en-us/red_hat_codeready_containers/1.0/html/getting_started_guide/getting-started-with-codeready-containers_gsg

 

Other resources:

https://dzone.com/articles/an-introduction-to-red-hat-openshift-codeready-con

https://www.dataversity.net/openshift-vs-kubernetes-the-seven-most-critical-differences/


To read Run B2Bi and OpenShift on your Laptop Part 2 – Windows click here


#DataExchange
#IBMSterlingB2BIntegratorandIBMSterlingFileGatewayDevelopers

Comments

Tue March 07, 2023 04:49 PM

Thanks, this was very helpful. I had to make a few changes to get it working with:

  • RHEL v9.1 (as a VM)
  • CRC v2.14

The first change was to most of the YAML files, as due to formatting on the new community site they were not valid YAML, and just needed to be cleaned up.

The second is that due to new security policies, only a privileged pod can access the hostPath PVs. Since I was running on RHEL, it was simple to add the NFS server config to the box. Then I created an /etc/exports that looked a bit like:
/home/halljb/NFS  192.168.0.0/16(rw,no_root_squash)

and then copied the JDBC driver into /home/halljb/NFS/resources/

Due to this change, I had to update all of the PV yaml configs to not use hostPath but NFS. You can find examples in ibm-b2bi-prod/ibm_cloud_pak/pak_extensions/pre-install/volume/ 
Here is a sample for resources-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: resources-pv-local
  labels:
    intent: resources
spec:
  storageClassName: crc-csi-hostpath-provisioner
  capacity:
    storage: 500Mi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
#  hostPath:
#    path: /mnt/pv-data/b2bi/resources
  nfs:
    server: 192.168.1.104
    path: /home/halljb/NFS/resources

Another issue I faced was that firewalld was blocking access to the DB2 and registry instances from the ASI pod, so you need to open ports 5000 and 50000 (or, as I am running it as a VM, just disable it ;)

As my RHEL was headless, I needed to access the UI from a different machine, and I followed this article to set up a reverse proxy with haproxy:
https://cloud.redhat.com/blog/accessing-codeready-containers-on-a-remote-server/
however, as that only gives access to the CRC dashboards, I had to add the following for B2BI:

frontend b2bsi
  bind 192.168.142.130:30000
  option tcplog
  mode tcp
  default_backend b2bsi

backend b2bsi
  mode tcp
  balance roundrobin
  server webserver1 192.168.130.11:30000 check

frontend liberty
  bind 192.168.142.130:30005
  option tcplog
  mode tcp
  default_backend liberty

backend liberty
  mode tcp
  balance roundrobin
  server webserver1 192.168.130.11:30005 check

frontend liberty-ssl
  bind 192.168.142.130:30006
  option tcplog
  mode tcp
  default_backend liberty

backend liberty-ssl
  mode tcp
  balance roundrobin
  option ssl-hello-chk
  server webserver1 192.168.130.11:30006 check

Thanks again for your article it was a huge help!