Sterling B2B Integration


Installing IBM Sterling File Gateway on Minikube Using Certified Containers

By Connor McGoey posted Mon September 16, 2024 10:08 AM


Table of Contents

Introductory Notes

Minikube Installation

DB2 Installation

SFG Installation

Resources

Introductory Notes

Note: This blog and its deployments are intended for development environments only. These instructions do not support production environments.

Intent

The purpose of this blog is to walk through a bare-bones deployment of IBM Sterling File Gateway (SFG) on a minimal Minikube cluster. Because IBM SFG requires a database, this blog also covers deploying an IBM DB2 database in the same Minikube cluster.

This blog focuses on a small development environment of IBM SFG deployed with Minikube on a single virtual machine or local machine with limited resources. If you are looking for a more fully featured deployment of SFG on OpenShift, see my previous blog, Installing IBM Sterling File Gateway and ITX / ITXA on Red Hat OpenShift Using Certified Container.

Presumptions

Prior to following these installation steps, you should ensure that you have met all of the prerequisite steps outlined in my previous blog.

Environment

The machine I will be installing Minikube on is running 64-bit ARM (ARM64) Linux. The Docker engine is running, and the Kubernetes command-line tool kubectl is installed and available. I also have root access to the machine.

Minikube Installation

About Minikube

Minikube is a Kubernetes distribution designed to run on local Linux, Windows, or macOS systems. While it is not intended for production deployments or for hosting applications in the cloud, it supports the latest Kubernetes version and the six previous minor versions, multiple container runtimes, and multiple deployment options.

Prerequisites

Prior to installing Minikube, note that the IBM SFG and DB2 installations in this blog were performed on a Minikube cluster with:

      • 4 CPUs
      • 8 GB Memory
      • 300 GB of disk space (~40 GB should suffice)

I have also met the other two prerequisites outlined on the Minikube installation page:

      • Internet connection
      • Container manager (Docker)
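As a quick sanity check, the CPU and memory sizing above can be verified from a shell before starting the cluster. This is just a sketch using standard Linux tools (nproc and /proc/meminfo); it is not part of the Minikube tooling:

```shell
# Rough preflight check against the sizing used in this blog.
cpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))
echo "CPUs: $cpus (need >= 4)"
echo "Memory: ${mem_gb} GiB (need >= 8)"
[ "$cpus" -ge 4 ] || echo "WARNING: fewer than 4 CPUs available"
[ "$mem_gb" -ge 8 ] || echo "WARNING: less than 8 GiB of memory available"
```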

Installing Minikube

To install Minikube on my machine, I select the appropriate release (Linux, ARM64) from the "Minikube Start" page and use the provided commands to install it:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-arm64
sudo install minikube-linux-arm64 /usr/local/bin/minikube && rm minikube-linux-arm64

Creating a Minikube Cluster

By this point, I have Helm, Minikube, and the Docker engine installed, and I have access to a repository containing the IBM Sterling File Gateway images.

I'll run the following command to start the Minikube cluster with 4 CPUs and 8GB Memory:

minikube start --force --cpus 4 --memory 8g

If Minikube does not detect that you are using the Docker engine, you can specify the driver with the --driver option:

minikube start --force --cpus 4 --memory 8g --driver docker

Once the cluster has started, I can run the following command to verify its status:

minikube status

DB2 Installation

Installation Prerequisites

These installation steps assume that your cluster environment has access to pull the image for DB2.

Namespace and Service Account

I'll first create a namespace for my database:

kubectl create namespace db2

Then, I will create a service account to use for my DB2 installation, ensuring I specify the namespace for the service account:

kubectl create serviceaccount db2sa -n db2

Database Install and Setup

To install DB2 in my namespace, I will create a new file named db2.yaml and put the following YAML definition in it. I will also ensure I have replaced <Your DB2 Password> with my desired database password:

apiVersion: v1
kind: Service
metadata:
  name: db2-lb
spec:
  selector:
    app: db2
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db2
spec:
  selector:
    matchLabels:
      app: db2
  serviceName: "db2"
  replicas: 1
  template:
    metadata:
      labels:
        app: db2
    spec:
      serviceAccountName: db2sa
      containers:
      - name: db2
        securityContext:
          privileged: true
        image: ibmcom/db2:11.5.5.1
        env:
        - name: LICENSE
          value: accept
        - name: DB2INSTANCE
          value: db2inst1
        - name: DB2INST1_PASSWORD
          value: <Your DB2 Password>
        ports:
        - containerPort: 50000
          name: db2
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /database
          name: db2vol
  volumeClaimTemplates:
  - metadata:
      name: db2vol
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi

Once I have saved the file, I will create the database and its service by running:

kubectl create -f db2.yaml -n db2

After waiting a few minutes for the DB2 pod to start, I can open a shell into the pod and switch to the db2inst1 user by running:

kubectl exec --stdin --tty <DB2 POD> -n db2 -- /bin/bash

su - db2inst1

Logged in as the db2inst1 user, I create a file that will create my SFG database. I'll name this file createDB.sql. In the file I'll put:

CREATE DATABASE SFGDB AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768;

To finish setting up my database I will save this file and run the following command as the db2inst1 user:

db2 -stvf createDB.sql
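The same two steps can also be scripted inside the pod shell with a heredoc instead of an editor. This is a sketch of the steps above, to be run as the db2inst1 user:

```shell
# Write createDB.sql without opening an editor, then run it with the
# DB2 command line processor (available only inside the pod as db2inst1).
cat > createDB.sql <<'EOF'
CREATE DATABASE SFGDB AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768;
EOF
# db2 -stvf createDB.sql   # run this line inside the DB2 pod
echo "createDB.sql written"
```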

While my database is being created, I will take note of the following information, which I will need later to install SFG:

      • Vendor of the database (dbVendor): db2
      • Cluster IP address of the load balancer (dbHost): <DB2 LB Cluster IP> (can be obtained by running kubectl get services -n db2)
      • Port for the database load balancer (dbPort): 50000
      • Database user (DB_USER): db2inst1
      • Database name (dbData): SFGDB
      • Database password: <DB2 Password>
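To keep these values together, it can help to stage them as shell variables. This is only a sketch: the ClusterIP below is a placeholder (substitute the address reported by kubectl get services -n db2), and the URL shown follows the standard DB2 type-4 JDBC form, jdbc:db2://<host>:<port>/<database>:

```shell
# Staging area for the connection details recorded above.
DB_VENDOR=db2
DB_HOST="10.96.0.10"   # placeholder -- use your db2-lb ClusterIP
DB_PORT=50000
DB_USER=db2inst1
DB_DATA=SFGDB
# Standard DB2 JDBC URL shape: jdbc:db2://<host>:<port>/<database>
JDBC_URL="jdbc:${DB_VENDOR}://${DB_HOST}:${DB_PORT}/${DB_DATA}"
echo "$JDBC_URL"
```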

SFG Installation

Namespace and Role Based Access Control (RBAC)

I'll begin the SFG installation by creating a namespace:

kubectl create namespace sfg

Note that I will be using the default service account for SFG.

I'll then create the role and rolebinding required for the installation. These can be found in the SFG Helm chart's README file. I'll place the following in a new file called sfgRBAC.yaml:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibm-b2bi-role-sfg
  namespace: sfg
rules:
- apiGroups: ['route.openshift.io']
  resources: ['routes','routes/custom-host']
  verbs: ['get', 'watch', 'list', 'patch', 'update']
- apiGroups: ['','batch']
  resources: ['secrets','configmaps','persistentvolumes','persistentvolumeclaims','pods','services','cronjobs','jobs']
  verbs: ['create', 'get', 'list', 'delete', 'patch', 'update']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibm-b2bi-rolebinding-sfg
  namespace: sfg
subjects:
- kind: ServiceAccount
  name: default
  namespace: sfg
roleRef:
  kind: Role
  name: ibm-b2bi-role-sfg
  apiGroup: rbac.authorization.k8s.io

To create the role and rolebinding, I will run the following command with the saved file:

kubectl apply -f sfgRBAC.yaml -n sfg

Creating SFG Secrets

Two secrets are required for a minimal SFG install. The first contains your system passphrase:

apiVersion: v1
kind: Secret
metadata:
  name: b2b-system-passphrase-secret
type: Opaque
stringData:
  SYSTEM_PASSPHRASE: <Your System Passphrase>

The second is the database secret which is used to store the authentication information for the database instance:

apiVersion: v1
kind: Secret
metadata:
  name: b2b-db-secret
type: Opaque
stringData:
  DB_USER: db2inst1
  DB_PASSWORD: <DB2 Password>

Both of the above definitions can be placed in a single file separated by ---. I am going to name this file sfgSecrets.yaml. To create the secrets, I run:

kubectl create -f sfgSecrets.yaml -n sfg

Creating an Image Pull Secret

I will need an image pull secret to access the SFG images in the repository. I am using the IBM Cloud Container Registry (ICR), for which I have an access key. To create the image pull secret with my credentials, I will run the following command:

kubectl create secret docker-registry icr-pull-secret --docker-username="cp" --docker-password="<Entitled registry API key>" --docker-email="<Your Email>" --docker-server="cp.icr.io" -n sfg

I can check the pull secret was successfully created by running:

kubectl get secrets -n sfg

Configuring override.yaml

I'll now create a copy of the SFG Helm chart's values.yaml file and name this new copy override.yaml. I will then change the following values to fit my environment.

ac.replicaCount: 0
 
api.ingress.internal.host: 'api-server.local'
api.ingress.internal.tls.enabled: false
api.internalAccess.enableHttps: false
api.livenessProbe.initialDelaySeconds: 1200
api.readinessProbe.initialDelaySeconds: 1200
api.resources.limits.cpu: 2000m
api.resources.limits.memory: 4Gi
api.resources.requests.cpu: 500m
api.resources.requests.memory: 1500Mi
 
appResourcesPVC.enabled: false
 
asi.ingress.external.tls.enabled: false
asi.ingress.internal.host: 'asi-server.local'
asi.ingress.internal.tls.enabled: false
asi.internalAccess.enableHttps: false
asi.livenessProbe.initialDelaySeconds: 1200
asi.readinessProbe.initialDelaySeconds: 1200
asi.resources.limits.cpu: 2000m
asi.resources.limits.memory: 4Gi
asi.resources.requests.cpu: 500m
asi.resources.requests.memory: 1500Mi
asi.startupProbe.initialDelaySeconds: 1200
 
global.license: true
global.image.pullSecret: 'icr-pull-secret'
 
persistence.useDynamicProvisioning: true
 
purge.internalAccess.enableHttps: false
purge.schedule: '0 0 * * *'
 
resourcesInit.enabled: true
setupCfg.adminEmailAddress: <Your Admin Email Address>
setupCfg.dbData: SFGDB
setupCfg.dbDrivers: db2jcc4.jar
setupCfg.dbHost: <DB2 LB Cluster IP>
setupCfg.dbPort: 50000
setupCfg.dbSecret: b2b-db-secret
setupCfg.dbVendor: db2
setupCfg.smtpHost: localhost
setupCfg.systemPassphraseSecret: b2b-system-passphrase-secret
setupCfg.useSslForRmi: false

Note that the following modifications were made specifically to suit my small Minikube environment:

      • The AC pod is an optional component and has been disabled in this configuration, as I won't be running any adapters and the goal is to keep the overall deployment lightweight.
        • You can still set up adapters as normal via the B2Bi user interface, and they will run from the main ASI pod.
      • The resources for the remaining two pods have been scaled down:
        • Requests are 500m CPU and 1500Mi memory.
        • Limits are 2000m CPU and 4Gi memory, because the database setup job and other setup tasks require more resources than the pods do in a running/ready state.
      • Because resources are limited and the startup process will take longer, each probe has been given a 1200-second (20-minute) delay to allow more time for the pods to start before the probes run.

Note: the database setup job pulls its resources from the ASI pod's resources subsection of the Helm chart and requires 2000m CPU and 4Gi memory. The raised limits account for this.

I will save the above changes in my override.yaml file and remember this file's location.
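For reference, the dotted paths listed above correspond to nested YAML in override.yaml. As an example, the setupCfg values map to a structure like this (showing only a few of the keys from this configuration):

```yaml
setupCfg:
  dbVendor: db2
  dbHost: <DB2 LB Cluster IP>
  dbPort: 50000
  dbData: SFGDB
  dbSecret: b2b-db-secret
```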

Helm Installation

I am now ready to use my override.yaml file to install IBM SFG. I will run the following command to install SFG with the release name my-sfg-release in the sfg namespace, with a timeout of 120 minutes to ensure the install has ample time to complete. I also specify chart version 3.0.1, which is the chart containing SFG version 6.2.0.1.

helm install my-sfg-release <IBM Helm Chart Repository>/ibm-sfg-prod --version 3.0.1 -f override.yaml --timeout=120m0s -n sfg

Validating Installation

After the database setup job completes, the ASI and API pods will start running. Because of the 1200-second initial delay configured above, the liveness, readiness, and startup probes will not run for 20 minutes. So, I will run the command below after 20 minutes to check the pods' statuses:

kubectl get pods -n sfg

Once the probes have passed, the output should show the ASI and API pods in the Running state.

Resources

My Previous Blog on Installing SFG on OCP (With More Detail)

Installing IBM Sterling File Gateway and ITX / ITXA on Red Hat OpenShift Using Certified Container

Helm Charts

IBM Helm Chart Repository

SFG Version 6.2.0.1

Installation Document References

Minikube Installation and Startup

Installing Sterling SFG / B2B Integrator using Certified Containers
