Installing IBM Sterling File Gateway and ITX / ITXA on Red Hat OpenShift Using Certified Container

By Connor McGoey posted Wed May 01, 2024 12:22 PM

  


Table of Contents

Introductory Notes

Helm

DB2 Installation

ITXA Installation

SFG Installation

ITX Integration

ITXA Integration

Resources

Acronyms

Troubleshooting

Introductory Notes

Products

IBM Sterling File Gateway (SFG) is a file transfer consolidation system that provides scalability and security. SFG can intelligently monitor, administer, route and transform high volumes of inbound and outbound files. 

IBM Transformation Extender (ITX) allows for any-to-any data transformation by automating transactions and validation of data between formats and standards.

IBM Transformation Extender Advanced (ITXA) adds industry-specific metadata and support for mapping, compliance checking, and related processing functions. Additionally, ITXA provides a user interface for managing this processing.

Intent

The purpose of this blog is to provide non-production guidance on deploying Sterling File Gateway and IBM Transformation Extender Advanced, and on integrating ITX and ITXA with the SFG deployment. If your deployments need specific configuration not covered in this blog, refer to the SFG and ITXA documentation for details relevant to your needs.

Note that while SFG can be deployed without ITX and/or ITXA integration and ITXA can be deployed without SFG, this blog specifically covers integrating the three products in a containerized deployment.

This is intended as an example of one possible configuration for SFG and ITX/ITXA.

Proposed Deployment

What will be deployed is as follows:

    • An IBM DB2 v11.5.5.1 instance with two databases, ITXADB and SFGDB, running in it, along with a load balancer that provides a cluster-IP connection to the DB2 instance and a persistent volume for storage (Figure 3).
    • An SFG v6.2.0.1 instance with ASI, AC, and API pods. The ASI pod will run ITX 10.1.2.0.20231130 and ITXA init 10.0.1.8-x86_64. It will be connected to the SFGDB database (Figure 1).
    • An ITXA v10.0.1.8-x86_64 instance running the ITXA UI server pod and ITX. It will be connected to the ITXADB database (Figure 2).
    • Two ITX persistent volumes connected to the SFG instance for logs and data.
    • A shared logs persistent volume connected to both the SFG and ITXA instances. This volume, like the other volumes for SFG and DB2, will be provisioned dynamically.

Presumptions

Prior to following the installation steps in this blog, it is important to note that the environment and resulting deployments should not be used to replicate and/or produce a production environment for SFG, ITX, and/or ITXA. Additionally, a few presumptions are made with regards to these installations and their steps:

      • These installations create a set of deployments that are not air-gapped. They disable certain security measures such as TLS, SSL, and HTTPS, all of which should be enabled in production configurations. For steps on enabling these settings, refer to the documentation for the necessary secrets and configurations.

        In SFG, the following values are changed in its YAML file:
ac.ingress.external.tls.enabled: false
ac.ingress.internal.tls.enabled: false
api.ingress.internal.tls.enabled: false
api.internalAccess.enableHttps: false
asi.ingress.external.tls.enabled: false
asi.ingress.internal.tls.enabled: false
asi.internalAccess.enableHttps: false
purge.internalAccess.enableHttps: false
setupCfg.useSslForRmi: false

In ITXA, the following values are changed in its YAML file:

itxauiserver.ingress.ssl.enabled: false

Additionally, for ITXA integration, the following values are changed in its YAML file:

integrations.itxaIntegration.sso.ssl.enabled: false



      • These instructions were developed on an OpenShift cluster running in the IBM Cloud. However, kubectl commands have also been provided and the instructions should work in Kubernetes as well.
      • The Helm releases pull images for the deployments from the IBM container registry, for which the environment is already configured with required permissions and entitlement. Steps for configuring your development environment to pull the necessary images are referenced in the prerequisites for both SFG and ITXA.
      • The database used is DB2 version 11.5.5.1. For SFG, the supported databases are DB2, Oracle, and Microsoft SQL Server; for ITXA, the supported databases are DB2, Oracle, and Microsoft SQL Server. As long as one of these databases is used for SFG and ITXA, the deployments only require that your databases are accessible from within the cluster. This may mean your databases are deployed on premises, as a SaaS offering, or in some other form.
        • If you choose to use a different database and/or a different deployment model for your database than the one outlined in this blog, you may need to change certain values in the YAML files for SFG and ITXA, such as the database vendor, driver, host, and so on. Additional configuration may also be required to make the databases themselves compatible with SFG and ITXA. For more information on configuring your desired database, see the SFG and ITXA documentation.
      • The SFG Helm chart is version 3.0.1 which uses SFG version 6.2.0.1 and the ITXA Helm chart is version 1.0.1 which uses ITXA version 10.0.1.8. Some of the older versions of the Helm charts will not work with these installation steps.
      • SFG version 6.2.0.1 and ITXA version 10.0.1.8 are compatible. Switching either SFG or ITXA version may cause compatibility issues. For specifics regarding compatibility for your deployment, refer to the ITXA integration section of the SFG docs.

Deployment Order

As outlined in the documentation for SFG and ITXA, databases must already be deployed and they must be accessible from within the cluster. Also, because ITXA is to be integrated with SFG, SFG requires the ITXA ingress host URL to exist prior to integrating ITXA. 

With these considerations in mind, this is the order of deployment and integration for this blog:

      1. ITXA and SFG DB2 Database Installation
      2. ITXA Installation
      3. SFG Installation
      4. ITX Integration
      5. ITXA Integration

Readability Convention

As a naming convention, where a variable is to be declared and used throughout the installation and that variable is one of the following: 

      • A variable that should not be publicly available for security reasons such as certain IP addresses and/or secrets.
      • A variable that may be specific to your cluster / namespace.
      • A variable whose name is not clear or descriptive enough.

I will use the following syntax: <Variable Name / Description> to reduce confusion. The name or description I use will not change between mentions of the same variable. When this syntax is used, they will be highlighted to indicate that they must be changed in your environment. Ex: <Change Me>.

Note that in some of the YAML definitions, a <Variable Name / Description> placeholder appears between quotes. In these cases, keep the quotes and replace only the placeholder with your specific value.

Packs

This deployment does not cover installing industry packs for ITXA. To install packs, you must first ensure you are entitled to them. For instructions on installing industry packs for ITXA, refer to the ITXA docs section on installing and deploying industry packs on ITXA.

Logging Into an OpenShift or Kubernetes Cluster

For instructions on how to login to an OpenShift cluster, refer to the official OpenShift CLI (oc) documentation.

For instructions on how to login to a Kubernetes cluster, refer to the official Kubernetes documentation on how to login to a cluster.

Helm

These installations make use of Helm v3. With regards to SFG and ITXA, using a different version of Helm may change how the charts should be handled. For issues regarding Helm, refer to the Helm documentation on how to install Helm and the commands available to you via the Helm CLI. IBM’s SFG version 3.0.1 Helm chart and ITXA version 1.0.1 Helm chart are available under the Resources subsection. For the purposes of this blog, Helm allows for automating the containerized deployments and upgrades of SFG and ITXA when used in conjunction with a provided and configurable YAML file. This YAML file is used to define relative configuration for the charts. The key here is to ensure that the file properly defines the necessary deployment configurations that fit your needs.
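If you do not already have a local copy of a chart's default configuration, one convenient way to produce the override file used throughout this blog is helm show values. This is a sketch run from inside an unpacked chart directory; the redirect filename is my own choice:

helm show values . > override.yaml

The rest of this blog edits a copy of each chart's values.yaml (named override.yaml) rather than the shipped file, so the chart defaults remain available for reference.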

DB2 Installation

Installation Prerequisites

These installation steps assume that your cluster environment has access to pull the image for DB2.

Namespace and Service Account

For this installation, I am going to use one instance of DB2 running in a new namespace in the cluster so that ITXA and SFG can both access it and their own individual databases within it. This should not be done in a production environment as it is best practice to isolate the ITXA and SFG database instances. To do this, I will create the new namespace for the DB2 instance:

oc new-project db2

I'll then create a service account within the namespace and give it necessary permissions:

oc create serviceaccount db2-sa

Or, if using kubectl commands:

kubectl create namespace db2

kubectl create serviceaccount db2-sa

You will then need to assign Role Based Access Control to the service account. I'll create a Role and RoleBinding by placing the following into a file named db2_rbac.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: db2-role
  namespace: db2
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/exec"]
  verbs: ["get", "list", "patch", "watch", "update", "create"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]
- apiGroups: ["batch", "extensions"]
  resources: ["jobs", "deployments"]
  verbs: ["get", "list", "watch", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: db2-rolebinding
  namespace: db2

roleRef:
  kind: Role
  name: db2-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: db2-sa
  namespace: db2

Then, to create the Role and RoleBinding, I’ll run either of the following commands:

oc create -f db2_rbac.yaml

kubectl create -f db2_rbac.yaml

To confirm all the above have been created, I’ll run the following set of commands and ensure I see my created resource for each:

kubectl get sa -n db2 | grep db2-sa
kubectl get roles -n db2 | grep db2-role
kubectl get rolebindings -n db2 | grep db2-rolebinding

Database Setup

I am now going to install my DB2 instance and a load balancer for it using the following YAML file as a template. I'll name this YAML file db2_deploy.yaml. The YAML file uses ibmc-block-gold as the storage class for the persistent volume, which is available to my OpenShift cluster on IBM Cloud. You can use a different storage class available to your cluster (found by running kubectl get storageclasses) instead:

apiVersion: v1
kind: Service
metadata:
  name: db2-lb
  namespace: db2

spec:
  selector:
    app: db2
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db2
spec:
  selector:
    matchLabels:
      app: db2
  serviceName: "db2"
  replicas: 1
  template:
    metadata:
      labels:
        app: db2
    spec:
      serviceAccountName: db2-sa
      containers:
      - name: db2
        securityContext:
          privileged: false
        image: icr.io/db2_community/db2:11.5.5.1
        env:
        - name: LICENSE 
          value: accept 
        - name: DB2INSTANCE 
          value: db2inst1 
        - name: DB2INST1_PASSWORD 
          value: <Your DB2 Password>      
        ports:
        - containerPort: 50000
          name: db2
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /database
          name: db2vol
  volumeClaimTemplates:
  - metadata:
      name: db2vol
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: ibmc-block-gold

I'll create my DB2 resources by running the following command:

oc create -f db2_deploy.yaml

To confirm that the DB2 pod has started correctly, I’ll run the following command:

kubectl get pods | grep db2

Once my pod is running, the above command will produce an output like:

db2-0     1/1     Running

With my DB2 pod running, I'll open a shell session in the pod and switch to the DB2 user:

oc rsh <DB2 Pod>

su - db2inst1

If you are using kubectl, you can use the following command to get a shell into the pod before switching the user:

kubectl exec --stdin --tty <DB2 POD> -- /bin/bash

Then, if prompted, I'll authenticate my session by providing the <Your DB2 Password> value I defined in the YAML file above.

Logged in as the DB2 user, I’ll first verify that DB2 is running. To do this, I run the following command:

db2start

The output “…The database manager is already active” indicates DB2 is running.

Next I am going to create two SQL files which will create the ITXA and SFG databases. I will name these databases ITXADB and SFGDB and name the files create_itxa_db.sql and create_sfg_db.sql. Note that any filenames will work.

In the create_itxa_db.sql file, I will put the following to create my ITXA database:

CREATE DATABASE ITXADB AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768;

Similarly, in the create_sfg_db.sql file, I'll write the following:

CREATE DATABASE SFGDB AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768;

Now I'm going to create the databases with the files I just made. The order of the following statements does not matter, but you must wait for the first database to finish being created (which may take a few minutes) before executing the next statement:

db2 -stvf create_itxa_db.sql

db2 -stvf create_sfg_db.sql
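Each CREATE DATABASE statement can take several minutes. Once both commands return, a quick sanity check that the two databases exist (standard DB2 CLI, still run as db2inst1) is:

db2 list db directory | grep -E "ITXADB|SFGDB"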

After a few minutes, both ITXA and SFG databases have been created. I'm going to take note of the following information from my database configuration which I know I'll need later for ITXA and SFG installations. You can find the <DB2 LB Cluster IP> by running either of the following commands in the db2 namespace and looking for the cluster IP of the db2-lb service:

oc get services

kubectl get services

      • Database vendor (dbVendor / dbType): db2
      • Cluster IP address of the load balancer (dbHostIp / dbHost): <DB2 LB Cluster IP>
      • Port for the database load balancer (dbPort): 50000
      • Database user (dbUser / DB_USER): db2inst1
      • Database names (databaseName / dbData): ITXADB / SFGDB
      • Database password: <DB2 Password>
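As an alternative to scanning the full service list for the db2-lb entry, the cluster IP can be pulled directly with a jsonpath query:

kubectl get service db2-lb -n db2 -o jsonpath='{.spec.clusterIP}'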

ITXA Installation

Prerequisites

Prior to installation, I am going to ensure I've met the prerequisites listed under the installation prerequisites for ITXA. I will also need my OpenShift cluster’s subdomain URL which, in my case, is us-south.containers.appdomain.cloud.

Installation

Because I am integrating ITXA with SFG, the docs suggest installing ITXA in the same namespace as SFG. Since I am installing ITXA before SFG, I'll create a new namespace to be used by both. I'll name the project sfg-itxa-nonprod, but it can be named anything:

oc new-project sfg-itxa-nonprod

oc adm policy add-scc-to-user anyuid -z default -n sfg-itxa-nonprod

Or, if using kubectl, you can accomplish the same with the following steps combined with the Role Based Access Control steps outlined in the Helm chart README file.

If using kubectl, first create the namespace (the default service account is created automatically in the new namespace):

kubectl create namespace sfg-itxa-nonprod

Then, if using kubectl, create a role and rolebinding by placing the following resource definitions into a file named itxa_rbac.yaml:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibm-itxa-role
  namespace: sfg-itxa-nonprod
rules:
  - apiGroups: ['route.openshift.io']
    resources: ['routes','routes/custom-host']
    verbs: ['get', 'watch', 'list', 'patch', 'update']
  - apiGroups: ['','batch']
    resources: ['secrets','configmaps','persistentvolumes','persistentvolumeclaims','pods','services','cronjobs','jobs']
    verbs: ['create', 'get', 'list', 'delete', 'patch', 'update']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibm-itxa-rolebinding
  namespace: sfg-itxa-nonprod
subjects:
  - kind: ServiceAccount
    name: default
    namespace: sfg-itxa-nonprod
roleRef:
  kind: Role
  name: ibm-itxa-role
  apiGroup: rbac.authorization.k8s.io

To create the role and role binding, you can then run the command:

kubectl create -f itxa_rbac.yaml

I then create the following secrets in the sfg-itxa-nonprod namespace, following the templates under ibm-itxa-prod-blog/ibm_cloud_pak/pak_extensions/pre-install/secret in the ITXA Helm chart.

The following secrets are for database connection information, TLS KeyStore password, and the password for the ITXA admin user:

apiVersion: v1
kind: Secret
metadata:
  name: itxa-db-secret
type: Opaque
stringData:
  databaseName: ITXADB
  dbUser: db2inst1
  dbPassword: <DB2 Password>
  dbHostIp: "<DB2 LB Cluster IP>"
  dbPort: "50000"
---
apiVersion: v1
kind: Secret
metadata:
  name: tls-keystore-secret
type: Opaque
stringData:
  tlskeystorepassword: <Your TLS Keystore Password>
---
apiVersion: v1
kind: Secret
metadata:
  name: itxa-user-secret
type: Opaque
stringData:
  adminPassword: "<Your Password for Admin User>"
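The three Secret definitions above can be saved into a single file (I'll call it itxa_secrets.yaml; the name is my own choice) and created with either of the following commands:

oc create -f itxa_secrets.yaml -n sfg-itxa-nonprod

kubectl create -f itxa_secrets.yaml -n sfg-itxa-nonprod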


With the secrets made, I will copy the ITXA values.yaml file into a new file called override.yaml and modify the following values to meet my specification:

global.license: true
global.tlskeystoresecret: "tls-keystore-secret"
global.database.dbvendor: DB2
global.database.dbDriver: "db2jcc4.jar"
global.persistence.useDynamicProvisioning: true
global.install.itxaUI.enabled: true
global.install.itxadbinit.enabled: true
itxauiserver.ingress.host: "asi.us-south.containers.appdomain.cloud"
itxauiserver.ingress.ssl.enabled: false
itxauiserver.ingress.ssl.secretname: ""
itxadatasetup.dbType: "db2"
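For reference, these dotted paths correspond to nested keys in override.yaml. Below is a partial sketch of how the values above nest, inferred directly from the dotted paths; it is not the complete file, and every key not shown keeps the default shipped with the chart:

global:
  license: true
  tlskeystoresecret: "tls-keystore-secret"
  database:
    dbvendor: DB2
    dbDriver: "db2jcc4.jar"
  persistence:
    useDynamicProvisioning: true
  install:
    itxaUI:
      enabled: true
    itxadbinit:
      enabled: true
itxauiserver:
  ingress:
    host: "asi.us-south.containers.appdomain.cloud"
    ssl:
      enabled: false
      secretname: ""
itxadatasetup:
  dbType: "db2"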

Note that itxauiserver.userSecret already defaults to "itxa-user-secret", the secret I created previously for the admin user password, and global.appsecret already defaults to "itxa-db-secret", the secret I created for my DB2 database connection information. I also set global.tlskeystoresecret to "tls-keystore-secret", the secret I made for my KeyStore password.

To set up my ITXA installation for integration with B2Bi/SFG and SSO (Single Sign-On), I will also need to edit the customer_overrides.properties file in the ITXA Helm chart, which is in the chart’s config directory. In the file I will place the following properties:

HostApplication.name=SBI
HostApplication.migrationStylesheet=ie_si_to_spe_hosted
HostApplication.driverName=InvokeSIBP
HostApplication.driverClass=com.ibm.spe.core.drivers.DriverInvokeSIBusinessProcess
HostApplication.restURL=https://<B2Bi/SFG Helm Release Name>-b2bi-asi-frontend-svc:<B2Bi/SFG ASI Frontend Service REST Adapter Port>/restwar/restapi/v1.0

Where <B2Bi/SFG Helm Release Name> is the name I will give my B2Bi/SFG Helm release when installing and <B2Bi/SFG ASI Frontend Service REST Adapter Port> is the value given in the B2Bi/SFG Helm values.yaml (or override.yaml) under asi.frontendService.ports.restHttpAdapter.port.

I will name my B2Bi/SFG release my-sfg-release and the default value for the REST HTTP Adapter port (which I will not change) is 35007. So, my specific host application REST URL property is:

HostApplication.restURL=https://my-sfg-release-b2bi-asi-frontend-svc:35007/restwar/restapi/v1.0

Note: The B2Bi/SFG Helm chart creates the services with the Helm release name prepended to it which is why I can set this property prior to installing B2Bi/SFG.

Now that my ITXA database and ITXA Helm chart contents are ready, I'll run this Helm command (which I modified from the one given in the ITXA README file) to install ITXA:

helm install my-itxa-release -f override.yaml --timeout 3600s .

After the ITXA database setup finishes (which may take some time), I will run another Helm upgrade to change the following value back to false in the ITXA Helm chart's override.yaml file I created:

global.install.itxadbinit.enabled: false
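A sketch of that upgrade, run from the ITXA chart directory with the sfg-itxa-nonprod project still selected and using the same arguments as the install:

helm upgrade my-itxa-release -f override.yaml --timeout 3600s .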

SFG Installation

Installation Prerequisites

These installation steps assume that your cluster environment meets the installation prerequisites for SFG. I will also need my OpenShift cluster’s subdomain URL which, in my case, is us-south.containers.appdomain.cloud.

SFG

If you are installing SFG without ITXA, this is where you would create the namespace for SFG; refer to the ITXA installation steps for instructions on properly establishing a new namespace.

I will then create a Role and RoleBinding by placing the following YAML definitions into a file named sfg_rbac.yaml:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibm-sfg-role
  namespace: sfg-itxa-nonprod
rules:
  - apiGroups: ['route.openshift.io']
    resources: ['routes','routes/custom-host']
    verbs: ['get', 'watch', 'list', 'patch', 'update']
  - apiGroups: ['','batch']
    resources: ['secrets','configmaps','persistentvolumes','persistentvolumeclaims','pods','services','cronjobs','jobs']
    verbs: ['create', 'get', 'list', 'delete', 'patch', 'update']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibm-sfg-rolebinding
  namespace: sfg-itxa-nonprod
subjects:
  - kind: ServiceAccount
    name: default
    namespace: sfg-itxa-nonprod
roleRef:
  kind: Role
  name: ibm-sfg-role
  apiGroup: rbac.authorization.k8s.io

With the file created, I can run the following command to create the role and rolebinding:

oc create -f sfg_rbac.yaml

The SFG documents mention I need to make secrets for my containerized deployment which I can find under the ibm_cloud_pak/pak_extensions/pre-install/secret folder found in the Helm chart. However, for my installation I won't provide KeyStore, TrustStore, or JMS secrets.

The system passphrase secret for SFG is used to start the system and to access protected system information:

apiVersion: v1
kind: Secret
metadata:
  name: b2b-system-passphrase-secret
type: Opaque
stringData:
  SYSTEM_PASSPHRASE: <Your System Passphrase>

The database secret is used to store the authentication information for the database instance:

apiVersion: v1
kind: Secret
metadata:
  name: b2b-db-secret
type: Opaque
stringData:
  DB_USER: db2inst1
  DB_PASSWORD: <DB2 Password>

Each of the above secrets goes in its own YAML file; I’ll name them system_passphrase_secret.yaml and b2b_db_secret.yaml, though you can name them anything. To create the secrets, I’ll run the following oc commands:

oc create -f system_passphrase_secret.yaml 

oc create -f b2b_db_secret.yaml

Or, if using kubectl:

kubectl create -f system_passphrase_secret.yaml

kubectl create -f b2b_db_secret.yaml

I'll then create a copy of the values.yaml file found in the Helm chart. I'll name this copy override.yaml. The values I'm going to change in it are:

ac.backendService.ports:
- name: adapter-1
  nodePort: 30201
  port: 30201
  protocol: TCP
  targetPort: 30201
ac.ingress.external.tls.enabled: false
ac.ingress.internal.host: ac.us-south.containers.appdomain.cloud
ac.ingress.internal.tls.enabled: false
 
api.ingress.internal.host: api.us-south.containers.appdomain.cloud
api.ingress.internal.tls.enabled: false
api.internalAccess.enableHttps: false
 
appResourcesPVC.enabled: false
 
asi.backendService.ports:
- name: adapter-1
  nodePort: 30201
  port: 30201
  protocol: TCP
  targetPort: 30201
asi.ingress.external.tls.enabled: false
asi.ingress.internal.host: asi.us-south.containers.appdomain.cloud
asi.ingress.internal.tls.enabled: false
asi.internalAccess.enableHttps: false
 
global.license: true
 
persistence.useDynamicProvisioning: true
 
purge.internalAccess.enableHttps: false
purge.schedule: 0 0 * * *
 
resourcesInit.enabled: true
setupCfg.adminEmailAddress: <Your Admin Email Address>
setupCfg.dbData: SFGDB
setupCfg.dbDrivers: db2jcc4.jar
setupCfg.dbHost: <DB2 LB Cluster IP>
setupCfg.dbPort: 50000
setupCfg.dbSecret: b2b-db-secret
setupCfg.dbVendor: db2
setupCfg.smtpHost: localhost
setupCfg.systemPassphraseSecret: b2b-system-passphrase-secret
setupCfg.useSslForRmi: false

From within the SFG Helm chart folder, I'll run the following command to install SFG in my namespace with the release name my-sfg-release:

helm install my-sfg-release --namespace sfg-itxa-nonprod --timeout 90m0s -f override.yaml .

(Optional) Enabling TLS/SSL for B2Bi/SFG

To enable TLS/SSL for B2Bi/SFG using the values in the Helm chart, you will need to leave the following values as their defaults / the values I give below:

ac.ingress.external.tls.enabled: true

ac.ingress.internal.tls.enabled: true

api.ingress.internal.tls.enabled: true

api.internalAccess.enableHttps: true

asi.ingress.external.tls.enabled: true

asi.ingress.internal.tls.enabled: true

asi.internalAccess.enableHttps: true

purge.internalAccess.enableHttps: true

setupCfg.useSslForRmi: true

If the above values are enabled in your values.yaml / override.yaml file, the B2Bi/SFG Helm deployment will handle the creation of the necessary OpenShift resources for TLS. This is handled through the automatic self-signed certificate creation provided by OpenShift Routes.

In Kubernetes, where Routes do not exist, the B2Bi/SFG pre-install TLS setup job will create the self-signed certificate Secrets needed for internal TLS enablement. That is, all *.internalAccess.tlsSecretName will be created for the internal frontend service communication.

It is important to note, though, that the *.ingress.internal.tls.secretName and *.ingress.external.tls.secretName secrets (used only in Kubernetes deployments for ingress) will not be created by the TLS pre-install job. These must be created manually and specified in the Helm chart.

A Note on B2Bi / SFG TLS Values

Application TLS secrets (asi/ac/api.internalAccess.tlsSecretName) are used in both OpenShift and Kubernetes deployments. As previously mentioned, these secrets will get automatically created, if not configured, during the deployment either by the respective services in the case of OpenShift or by the B2Bi/SFG pre-install TLS setup job in the case of a Kubernetes deployment. If given in the Helm values.yaml / override.yaml, these will be used instead.

The TLS key / cert pair is then added to the respective keystores for ASI dashboard or APIs on liberty for end-to-end SSL.

The secret names you can give for *.ingress.internal / *.ingress.external apply only to Kubernetes deployments for configuring TLS on Ingresses. B2Bi/SFG Kubernetes deployments use Ingress objects instead of Routes for external access.

(Optional) Creating and Using Custom TLS Certificates for Internal Access

As mentioned above in A Note on B2Bi / SFG TLS Values, the application TLS secrets (asi/ac/api.internalAccess.tlsSecretName) get automatically created either through OpenShift or the pre-install TLS setup job. However, these TLS secrets can be specified in the Helm chart to manually give the TLS certificates for internal access. A common reason to manually set these secrets is to provide your own CA-signed certificates as opposed to using the self-signed certificates set by the TLS setup job or OpenShift. 

It is important to note that any of these secrets not specified in the Helm values will be automatically created, and not all secrets need to be given if you wish to provide only some. For example, you could provide asi.internalAccess.tlsSecretName and api.internalAccess.tlsSecretName, but leave ac.internalAccess.tlsSecretName empty to allow the chart to handle it automatically.

I will give an example of providing a TLS secret for ASI only (asi.internalAccess.tlsSecretName). The name of the secret can be anything, but I’ll call mine asi-internal-tls-secret. I’ll do this with openssl.

First, I’ll ensure that I am logged into the cluster in which I will install or have already installed B2Bi/SFG. Then, in my terminal, I’ll run the following commands to generate the secret:

openssl req -x509 -nodes -sha256 -subj "/CN=<B2Bi/SFG Helm Release Name>-b2bi-asi-frontend-svc.sfg-itxa-nonprod.svc" -days 730 -newkey rsa:2048 -keyout tls.key -out tls.crt

kubectl create secret tls asi-internal-tls-secret --key tls.key --cert tls.crt -n sfg-itxa-nonprod

rm -f tls.key tls.crt

Note: a Kubernetes TLS secret takes a public/private key pair. Additionally, the Common Name given for the certificate created is relative to the service for ASI (change this depending on which certificate is being made).

Finally, when I edit my B2Bi/SFG values.yaml / override.yaml I’ll ensure that I specify asi.internalAccess.tlsSecretName: asi-internal-tls-secret.

Creating and Using TLS Certificates for Kubernetes Ingress (Required if Using TLS on Kubernetes Ingress)

Note: the following steps assume you have a CA certified TLS certificate and private key for each of your enabled internal / external hosts. This is because these certificates should be from official certificate authorities as opposed to self-signed. However, for internal development testing purposes, you can use self-signed certificates and keys which can be created in the same way I did in the previous section for creating the internal TLS secrets via openssl.

Kubernetes installations of SFG/B2Bi will need to have their ingress secrets created manually prior to installation. To do so, I’ll run the following command for each certificate / private key pair:

kubectl create secret tls "<New Ingress Secret Name>" --key <Path to Key File> --cert <Path to Certificate File> -n sfg-itxa-nonprod

Repeat the above command for each new TLS secret, replacing the variables with your environment-specific configuration.

In my environment, on my local machine, I have three directories for my ingress TLS files: ac-tls, asi-tls, and api-tls. The contents of each directory are:

ac-tls/internal.key
ac-tls/internal.crt
ac-tls/external.key
ac-tls/external.crt
api-tls/internal.key
api-tls/internal.crt
asi-tls/internal.key
asi-tls/internal.crt
asi-tls/external.key
asi-tls/external.crt

With the above files I’ll create 5 TLS secrets for ingress called:

      • ac-tls-internal-secret
      • ac-tls-external-secret
      • api-tls-internal-secret
      • asi-tls-internal-secret
      • asi-tls-external-secret

For example, to create the ac-tls-internal-secret, I’ll run the above command:

kubectl create secret tls ac-tls-internal-secret --key ac-tls/internal.key --cert ac-tls/internal.crt -n sfg-itxa-nonprod

With every secret made, I’ll specify the following in my override.yaml file:

ac.ingress.external.tls.secretName: "ac-tls-external-secret"
ac.ingress.internal.tls.secretName: "ac-tls-internal-secret"

api.ingress.internal.tls.secretName: "api-tls-internal-secret"

asi.ingress.external.tls.secretName: "asi-tls-external-secret"
asi.ingress.internal.tls.secretName: "asi-tls-internal-secret"

For the changes to take effect, I’ll then need to run a helm install or helm upgrade.

Patching OpenShift Routes with New Certificates

Note: the following steps assume you have a CA certified TLS certificate, private key, and CA bundle file.

Though SSL for OpenShift Routes is automatically configured for you with self-signed certificates via the B2Bi/SFG Helm chart, it is strongly recommended that you obtain a CA certified TLS certificate and update the Routes manually.

To do this, create a script on the machine where you run OpenShift commands. In this script (I’ll name my script patchRoutes.sh) I will put the following:

CRT_FN=<Path to Certificate>
KEY_FN=<Path to Private Key>
CABUNDLE_FN=<Path to CA Bundle File>
CERTIFICATE="$(awk '{printf "%s\\n", $0}' ${CRT_FN})"
KEY="$(awk '{printf "%s\\n", $0}' ${KEY_FN})"
CABUNDLE=$(awk '{printf "%s\\n", $0}' ${CABUNDLE_FN})
oc patch route $(oc get routes -l release=<Release_name> -o jsonpath="{.items[*].metadata.name}") -p '{"spec":{"tls":{"certificate":"'"${CERTIFICATE}"'", "key":"'"${KEY}"'" ,"caCertificate":"'"${CABUNDLE}"'"}}}'

Note that I will fill in the following variables with my environment-specific details:

      • <Path to Certificate> - absolute path to your certificate.
      • <Path to Private Key> - absolute path to the corresponding private key.
      • <Path to CA Bundle File> - absolute path to the CA Bundle File.
      • <Release_name> - your B2Bi/SFG Helm release name given at install time.

After ensuring that I am logged into my OpenShift cluster and working in the namespace of my B2Bi/SFG installation (oc project sfg-itxa-nonprod), I’ll run the script to update the Routes:

./patchRoutes.sh
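To spot-check that the Routes picked up the new certificate, the tls block of one of the patched Routes can be inspected; this is a sketch that reuses the label selector from the script and truncates the output for readability:

oc get routes -l release=<Release_name> -o jsonpath='{.items[0].spec.tls.certificate}' | head -c 100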

Verification

The resulting SFG installation will have created a set of routes and services to connect to your instance. To access the SFG dashboard, I first confirm that the Helm release is deployed by running the following command:

helm status my-sfg-release

I will then check the status of the ASI, AC, and API pods that were deployed by the chart and ensure that they have the 'Running' status:

oc get pods -l release=my-sfg-release -n sfg-itxa-nonprod -o wide

Or, if using kubectl, by running:

kubectl get pods -l release=my-sfg-release -n sfg-itxa-nonprod -o wide

Finally, to connect to my SFG dashboard, I will find the URL with the following template:

<Internal ASI Ingress Hostname>/dashboard

For me, because my internal ASI ingress hostname was asi.us-south.containers.appdomain.cloud, my dashboard URL is:

asi.us-south.containers.appdomain.cloud/dashboard

You can also find the exact route URL by running either of the following oc / kubectl commands:

oc get routes -o wide | grep dashboard

kubectl get routes -o wide | grep dashboard

Following this URL brings me to my SFG dashboard login page, where I can log in with my authentication details.

The default login credentials are User ID 'admin' and password 'password'. After logging in and changing the admin password, I can see the SFG dashboard.
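Before logging in, a quick reachability check from outside the cluster can confirm the ingress host resolves and answers (plain HTTP here, since TLS was disabled in this configuration):

curl -I http://asi.us-south.containers.appdomain.cloud/dashboard

Any HTTP response, even a redirect, indicates the route is reachable; a timeout points to a DNS or ingress issue instead.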

ITX Integration

Prerequisites

Prior to integration, I'll check that I meet the prerequisites listed in the SFG docs for ITX integration.

Integration

To integrate ITX with SFG, I'll follow the steps outlined in the docs for SFG and modify my current SFG Helm release by changing the following in the override.yaml file of the running SFG deployment's Helm chart:

itxIntegration.enabled: true

itxIntegration.dataSetup.enabled: true

I'll then upgrade the Helm chart and, as mentioned in the docs, make sure to turn the database setup variable back to false after the upgrade:

itxIntegration.dataSetup.enabled: false
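One way to run this two-step flow without editing override.yaml twice is to let --set override the file on the second upgrade (Helm gives --set higher precedence than -f). This is a sketch, run from the SFG chart directory:

helm upgrade my-sfg-release --namespace sfg-itxa-nonprod --timeout 90m0s -f override.yaml .

helm upgrade my-sfg-release --namespace sfg-itxa-nonprod --timeout 90m0s -f override.yaml --set itxIntegration.dataSetup.enabled=false .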

Verification

I'll verify my installation by checking that the ITX Map Service is now enabled in the SFG dashboard.

I know that within my SFG dashboard, the service is found by following this path:

"Administration Menu" -> "Deployment" -> "Services" -> "Configuration" -> "List" -> "Search by Service Type"

ITXA Integration

Prerequisites

Prior to integration, I am again going to ensure I've met the prerequisites listed in the docs for ITXA integration. I will also need the ASI URL I provided in both the ITXA and SFG setup, which for me is asi.us-south.containers.appdomain.cloud.

Integration

ITXA is deployed in the same namespace as SFG, so I'll run a Helm upgrade on my SFG deployment to integrate it. Keeping in mind that integrations.itxaIntegration.appSecret maps to the ITXA DB2 connection secret (just as it did in the ITXA Helm chart), and following the SFG documentation, these are the values to change in the SFG override.yaml file to integrate ITXA:

integrations.itxaIntegration.appSecret: itxa-db-secret
integrations.itxaIntegration.dataSetup.enabled: true
integrations.itxaIntegration.enabled: true
integrations.itxaIntegration.sso.host: asi.us-south.containers.appdomain.cloud
integrations.itxaIntegration.sso.port: 80
integrations.itxaIntegration.sso.ssl.enabled: false

Note that integrations.itxaIntegration.sso.port above, which defaults to 443 in the SFG YAML file, has been changed to 80. Port 80 is the default OCP HTTP ingress port, and because I have disabled SSL, the default encrypted HTTPS port 443 will not work.

I will now run a Helm upgrade on my SFG deployment to finish the ITXA integration, followed by one final Helm upgrade to ensure I set the database setup job back to false:

integrations.itxaIntegration.dataSetup.enabled: false

Verification

To verify my installation, I find that the following services are now enabled in the SFG dashboard:

      • SPE Check Pending Ack Status Service

      • SPE De-enveloping Service

      • SPE Enveloping Service

      • SPE Transformation Service

I found these services by following:

"Administration Menu" -> "Deployment" -> "Services" -> "Configuration" -> "Search" -> "Service Name:" -> "SPE" -> "Go!"

I will also verify that I can access the Standards Processing Engine (SPE) Trading Partner UI by following:

"Administration Menu" -> "Standards Processing Engine" -> "SPE Server" -> "Launch SPE Trading Partner UI"

If you cannot access the SPE Trading Partner UI, ensure that the ingress host ports match between ITXA and SFG and that the ITXA UI server port has been changed from 443 to 80. If you still cannot access it, refer to the bottom of the docs page dedicated to integrating ITXA for steps on changing certain configuration properties and other troubleshooting tips.

Finally, I'll run an ITXA sample from my ASI server pod. In this example I will run the Healthcare sample pack. First, I will open a shell session in my ASI server pod, which in my namespace is named my-sfg-release-b2bi-asi-server-0. I'll run the following command:

oc rsh my-sfg-release-b2bi-asi-server-0

Once in the pod, I will navigate to where the SPE pack script files are located:

cd /opt/IBM/spe/bin

I will now run the spesetup.sh script:

. ./spesetup.sh

After the setup script finishes, I run the setup script for the Healthcare sample pack:

./spesetupsamples-packhc.sh

Once the setup script for the Healthcare sample pack finishes, I will then change directories back into bin and run the Healthcare sample pack:

cd bin

./sperunsamples-packhc.sh

And, once prompted for a standard, provide: hipaa.

Resources

Helm Charts

SFG Version 6.2.0.1

ITXA Version 10.0.1.8

Installation Document References

Installing Sterling SFG / B2B Integrator using Certified Containers

Installing ITXA using Certified Containers

ITX and ITXA Integration with Sterling B2B Integrator

Acronyms

  • OCP: OpenShift Container Platform

  • SFG: IBM Sterling File Gateway

  • ITX: IBM Transformation Extender

  • ITXA: IBM Transformation Extender Advanced

  • B2B(i): Business to Business (Integrator)

Troubleshooting

"Can't connect to the database"

Reasons:

If your deployment is unable to connect to the specified database, you will see an error like the following:

Failed to get a connection with the configured database parameters due to error : [jcc][t4][2043][11550][4.34.30] Exception java.net.ConnectException: Error opening socket to server / {DB Host IP} on port {DB PORT} with message: Connection timed out. ERRORCODE=-4499, SQLSTATE=08001. Please make sure the following database configurations are correct - DB_USER | DB_PASSWORD | DB_HOST | DB_PORT | DB_DATA | DB_DRIVERS

The error message indicates that the database setup job and / or the application pods cannot connect to the database.

The most likely reason you are seeing this issue is a mistake in your database configuration, such as the wrong database user, IP address, port, or driver.

Another reason why you may be seeing this issue is that IBM Sterling B2Bi/SFG deploys with restrictive Network Policies by default. Depending on which version of B2Bi/SFG you are installing, these network policies may be blocking connectivity to your database.

This blog's deployment does not encounter this issue because the database configuration properties are correct, and egress traffic is allowed from the namespace to any other namespace / port combination within the cluster by default.
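To tell a configuration mistake apart from a blocked network path, a raw TCP test from inside one of the application pods is a quick first step. This is a sketch using bash's built-in /dev/tcp redirection; the pod name, host IP, and port are environment-specific placeholders, and it assumes bash is available in the container:

kubectl exec -it <ASI Pod> -n sfg-itxa-nonprod -- bash -c 'timeout 5 bash -c "</dev/tcp/<DB2 LB Cluster IP>/50000" && echo reachable || echo unreachable'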

Solutions:

Confirm Database Configuration
      1. Obtain necessary database configuration values (DB_USER, DB_PASSWORD, DB_HOST, DB_PORT, DB_DATA, DB_DRIVERS) from the database server / network. With that information, ensure that the values given in the B2Bi/SFG Helm chart and the database secret match.
Allowing Connection to Database from the Namespace (Network Policy fix)
      1. Create an additional custom network policy that allows B2Bi/SFG to connect to your database. Given that Kubernetes Network Policies are additive, creating an additional egress network policy to your database would allow connection to your database.
        Below is a template Network Policy that you can modify and use for your installation. It applies to pods created by my B2Bi/SFG Helm release, and allows egress traffic to the database using TCP:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: b2bi-sfg-db-egress-np
          namespace: sfg-itxa-nonprod
        spec:
          podSelector:
            matchLabels:
              release: my-sfg-release
          policyTypes:
            - Egress
          egress:
            - to:
                - ipBlock:
                    cidr: <Database IP>/32
                    except: []
              ports:
                - protocol: TCP
                  port: <Database Port>
      2. (ONLY FOR NON-PRODUCTION DEVELOPMENT ENVIRONMENTS) Delete all B2Bi/SFG Network Policies. Deleting all Network Policies would remove all restrictions and allow all ingress and egress traffic.

Backend Service ClusterIP / NodePort Conflicts

Reasons:

IBM Sterling B2Bi/SFG creates a backend service for ASI and AC pods which both default to type LoadBalancer in the values.yaml file.

However, in some instances, if you change either asi.backendService.type or ac.backendService.type, the Helm validation step may throw an error about the use of NodePort values in combination with a Service type that does not use NodePorts.

This blog's deployment does not encounter this issue because I leave both backend services as type LoadBalancer.

Solutions:

    1. The first solution to this issue is to leave the service as type LoadBalancer. If your cluster does not have the capability to create a public IP for the service, it will create a LoadBalancer service, fail to provision a public IP, but still assign a ClusterIP for internal access which you can use.
    2. If using a Service type that does not use NodePorts, review your asi.backendService.ports, asi.backendService.portRanges, ac.backendService.ports, and ac.backendService.portRanges configurations and remove mentions of NodePorts or NodePort ranges. For example:

Before:

ac:
  backendService:
    type: ClusterIP
    ports:
      - name: adapter-1
        port: 30401
        targetPort: 30401
        nodePort: 30401
        protocol: TCP

After:

ac:
  backendService:
    type: ClusterIP
    ports:
      - name: adapter-1
        port: 30401
        targetPort: 30401
        protocol: TCP

LoadBalancer Service(s) Pending

Reasons:

If your cluster is not set up to deploy LoadBalancers but you have specified some services as type LoadBalancer in your YAML file, your Helm installation will be marked as successful; however, the services will be stuck permanently in a Pending state because an external IP cannot be provisioned.

You can check the status of your namespace’s services by running the following command:

kubectl get services -n sfg-itxa-nonprod

For each service of type LoadBalancer you will see an IP under the EXTERNAL-IP column if the IP provisioning was successful. If unsuccessful, you will see <pending> instead which indicates that an External IP has not been given to the Service.

Solutions:

    1. If your cluster should be provisioning an External IP for your LoadBalancer Services then you should review your cluster’s configuration and ensure that it has the capability to do so. Any service stuck in <pending> will be assigned an External IP once available. 
    2. You can configure the <pending> LoadBalancer services to be of type NodePort instead (NOT RECOMMENDED FOR PRODUCTION ENVIRONMENTS) to open a port on each worker node in the cluster for the application. This would allow external access to the application at the cost of security.
    3. You could also configure the <pending> LoadBalancer services to be of type ClusterIP, which allows access from within the cluster but not from outside it (see the sketch after this list).
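For options 2 and 3, the supported way to change the Service type is through the Helm values rather than by editing the Services directly. A sketch of the override.yaml change (here switching both backend services to ClusterIP), to be applied with a helm upgrade using the updated file:

ac.backendService.type: ClusterIP
asi.backendService.type: ClusterIP

Remember to also remove any nodePort entries from the ports lists, as shown in the previous troubleshooting section.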