
Installing IBM Sterling File Gateway and ITX / ITXA on Red Hat OpenShift Using Certified Container

By Connor McGoey posted Wed May 01, 2024 12:22 PM

  


Table of Contents

Introductory Notes

Helm

DB2 Installation

ITXA Installation

SFG Installation

ITX Integration

ITXA Integration

Resources

Acronyms

Introductory Notes

Products

IBM Sterling File Gateway (SFG) is a file transfer consolidation system that provides scalability and security. SFG can intelligently monitor, administer, route, and transform high volumes of inbound and outbound files.

IBM Transformation Extender (ITX) allows for any-to-any data transformation by automating transactions and validation of data between formats and standards.

IBM Transformation Extender Advanced (ITXA) provides additional support and metadata for mapping, compliance checking, and related processing functions for specific industries. Additionally, ITXA provides a user interface for interacting with these functions.

Intent

The purpose of this blog is to provide non-production details on how to deploy Sterling File Gateway and IBM Transformation Extender Advanced, and how to integrate ITX and ITXA with the SFG deployment. If your deployments need specific information not covered in this blog, refer to the SFG and ITXA documentation for details regarding your deployments' needs.

Note that while SFG can be deployed without ITX and/or ITXA integration and ITXA can be deployed without SFG, this blog specifically covers integrating the three products in a containerized deployment.

This is intended as an example of one possible configuration for SFG and ITX/ITXA.

Proposed Deployment

What will be deployed is as follows:

    • An IBM Database 2 (DB2) v11.5.5.1 instance with two databases, ITXADB and SFGDB, running in it, along with a load balancer that provides a cluster IP connection to the DB2 instance and a persistent volume for storage (Figure 3).
    • An SFG v6.2.0.1 instance with ASI, AC, and API pods. The ASI pod will run ITX 10.1.2.0.20231130 and ITXA init 10.0.1.8-x86_64. It will be connected to the SFGDB database (Figure 1).
    • An ITXA v10.0.1.8-x86_64 instance running the ITXA UI server pod and ITX. It will be connected to the ITXADB database (Figure 2).
    • Two ITX persistent volumes connected to the SFG instance for logs and data.
    • A shared logs persistent volume connected to both the SFG and ITXA instances. This volume, like the other volumes for SFG and DB2, will be provisioned dynamically.

Presumptions

Prior to following the installation steps in this blog, it is important to note that the environment and resulting deployments should not be used to replicate or produce a production environment for SFG, ITX, and/or ITXA. Additionally, a few presumptions are made with regards to these installations and their steps:

      • These installations create a set of deployments that are not air-gapped. They disable certain security measures such as TLS, SSL, and HTTPS, which should be enabled in production configurations. For steps on enabling these settings, refer to the docs for the necessary secrets and configurations.

        In SFG, the following values are changed in its YAML file:
ac.ingress.external.tls.enabled: false
ac.ingress.internal.tls.enabled: false
api.ingress.internal.tls.enabled: false
api.internalAccess.enableHttps: false
asi.ingress.external.tls.enabled: false
asi.ingress.internal.tls.enabled: false
asi.internalAccess.enableHttps: false
purge.internalAccess.enableHttps: false
setupCfg.useSslForRmi: false

In ITXA, the following values are changed in its YAML file:

itxauiserver.ingress.ssl.enabled: false

Additionally, for ITXA integration, the following values are changed in its YAML file:

integrations.itxaIntegration.sso.ssl.enabled: false



      • These instructions were developed on an OpenShift cluster running in the IBM Cloud. However, kubectl commands have also been provided and the instructions should work in Kubernetes as well.
      • The Helm releases pull images for the deployments from the IBM container registry, for which the environment is already configured with required permissions and entitlement. Steps for configuring your development environment to pull the necessary images are referenced in the prerequisites for both SFG and ITXA.
      • The database used is DB2 version 11.5.5.1. For SFG, the following databases are supported: DB2, Oracle, and Microsoft SQL Server. For ITXA, the supported databases are likewise DB2, Oracle, and Microsoft SQL Server. So long as one of these databases is being used for SFG and ITXA, the deployments only require that your databases are accessible from within the cluster. This may mean that your databases are deployed on premises, as a SaaS, or in some other form.
        • If you choose to use a different database and/or a different deployment for your database than that which is outlined in this blog, you may need to change certain values in the YAML files for SFG and ITXA, such as the database vendor, driver, and host. More configuration may also be required to make the databases themselves compatible with SFG and ITXA. For more information on configuring your desired database, see the SFG and ITXA documentation.
      • The SFG Helm chart is version 3.0.1 which uses SFG version 6.2.0.1 and the ITXA Helm chart is version 1.0.1 which uses ITXA version 10.0.1.8. Some of the older versions of the Helm charts will not work with these installation steps.
      • SFG version 6.2.0.1 and ITXA version 10.0.1.8 are compatible. Switching either SFG or ITXA version may cause compatibility issues. For specifics regarding compatibility for your deployment, refer to the ITXA integration section of the SFG docs.

Deployment Order

As outlined in the documentation for SFG and ITXA, databases must already be deployed and they must be accessible from within the cluster. Also, because ITXA is to be integrated with SFG, SFG requires the ITXA ingress host URL to exist prior to integrating ITXA. 

With these considerations in mind, this is the order of deployment and integration for this blog:

      1. ITXA and SFG DB2 Database Installation
      2. ITXA Installation
      3. SFG Installation
      4. ITX Integration
      5. ITXA Integration

Readability Convention

As a naming convention, wherever a variable is to be declared and used throughout the installation and that variable is one of the following:

      • A variable that should not be publicly available for security reasons such as certain IP addresses and/or secrets.
      • A variable that may be specific to your cluster / namespace.
      • A variable whose name is not clear or descriptive enough.

I will use the following syntax: <Variable Name / Description> to reduce confusion. The name or description I use will not change between mentions of the same variable.

Note that in some of the YAML definitions, the variable <Variable Name / Description> is contained between quotes. In these cases, the quotes remain in the YAML after you replace the variable with your specific value.

Packs

This deployment does not cover installing industry packs for ITXA. To install packs, you must first ensure you are entitled to them. For instructions on installing industry packs for ITXA, refer to the ITXA docs section on installing and deploying industry packs on ITXA.

Helm

These installations make use of Helm v3. With regards to SFG and ITXA, using a different version of Helm may change how the charts should be handled. For issues regarding Helm, refer to the Helm documentation on installing Helm and on the commands available via the Helm CLI. IBM's SFG Helm chart (version 3.0.1) and ITXA Helm chart (version 1.0.1) are linked under the Resources section. For the purposes of this blog, Helm automates the containerized deployments and upgrades of SFG and ITXA when used in conjunction with a provided, configurable YAML file that defines the configuration for the charts. The key is to ensure that this file properly defines the deployment configuration that fits your needs.
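
As a quick reference, the basic Helm workflow used throughout this blog looks like this (the release name is a placeholder, and each command is run from the extracted chart folder):

helm version                     # confirm Helm v3 is installed
cp values.yaml override.yaml     # copy the chart defaults, then edit override.yaml
helm install <Release Name> -f override.yaml --timeout 90m0s .
helm upgrade <Release Name> -f override.yaml --timeout 90m0s .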

DB2 Installation

Installation Prerequisites

These installation steps assume that your cluster environment has access to pull the image for DB2.

Namespace and Service Account

For this installation, I am going to use one instance of DB2 running in a new namespace in the cluster so that ITXA and SFG can both access it and their own individual databases within it. This should not be done in a production environment as it is best practice to isolate the ITXA and SFG database instances. To do this, I will create the new namespace for the DB2 instance:

oc new-project db2

I'll then create a service account within the namespace and give it necessary permissions:

oc create serviceaccount <DB2 Service Account>

oc adm policy add-scc-to-user privileged -n db2 -z <DB2 Service Account>

Or, if using kubectl commands:

kubectl create namespace db2

kubectl create serviceaccount <DB2 Service Account> -n db2

You will then need to assign Role Based Access Control to the service account using the YAML file in the docs.
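
For reference, a minimal sketch of what that binding might look like on OpenShift is below; it grants the service account use of the privileged SCC. The Role and RoleBinding names here (db2-privileged-scc, db2-privileged-scc-binding) are my own placeholders, and the YAML in the docs should be treated as authoritative:

# Hypothetical names; refer to the docs for the authoritative definition.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: db2-privileged-scc
  namespace: db2
rules:
  - apiGroups: ['security.openshift.io']
    resources: ['securitycontextconstraints']
    resourceNames: ['privileged']
    verbs: ['use']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: db2-privileged-scc-binding
  namespace: db2
subjects:
  - kind: ServiceAccount
    name: <DB2 Service Account>
    namespace: db2
roleRef:
  kind: Role
  name: db2-privileged-scc
  apiGroup: rbac.authorization.k8s.io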

Database Setup

I am now going to install my ITXA DB2 instance and a load balancer for it using the following YAML file as a template. I'll name this YAML file db2_deploy.yaml. The YAML file uses ibmc-block-gold as the storage class for the persistent volume which is available to my OpenShift cluster under the IBM Cloud. You can use a different volume storage class available to your cluster instead: 

apiVersion: v1
kind: Service
metadata:
  name: db2-lb
spec:
  selector:
    app: db2
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db2
spec:
  selector:
    matchLabels:
      app: db2
  serviceName: "db2"
  replicas: 1
  template:
    metadata:
      labels:
        app: db2
    spec:
      serviceAccountName: <DB2 Service Account>
      containers:
      - name: db2
        securityContext:
          privileged: true
        image: ibmcom/db2:11.5.5.1
        env:
        - name: LICENSE
          value: accept
        - name: DB2INSTANCE
          value: db2inst1
        - name: DB2INST1_PASSWORD
          value: <Your DB2 Password>
        ports:
        - containerPort: 50000
          name: db2
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /database
          name: db2vol
  volumeClaimTemplates:
  - metadata:
      name: db2vol
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: ibmc-block-gold

I'll create my DB2 resources by running the following command:

oc create -f db2_deploy.yaml
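
The pod can take a minute or two to become ready; you can watch its status with either of:

oc get pods -n db2 -w

kubectl get pods -n db2 -w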

With my DB2 pod running, I'll open a remote shell session in the pod and switch to the DB2 instance user:

oc rsh <DB2 Pod>

su - db2inst1

If you are using kubectl, you can use the following command to get a shell into the pod before switching the user:

kubectl exec --stdin --tty <DB2 POD> -- /bin/bash

Then, if prompted, I'll authenticate my session by providing <Your DB2 Password> I defined in the YAML file above.

Logged in as the DB2 user, I am going to create two SQL files which will create the ITXA and SFG databases. I will name these databases ITXADB and SFGDB and name the files create_itxa_db.sql and create_sfg_db.sql. Note that any filenames will work.

In the create_itxa_db.sql file, I will put the following to create my ITXA database:

CREATE DATABASE ITXADB AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768;

Similarly, in the create_sfg_db.sql file, I'll write the following:

CREATE DATABASE SFGDB AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768;

Now I'm going to create the databases with the files I just made. The order of the following statements does not matter, but you must wait for the first database to finish being created (which may take a few minutes) before executing the next statement:

db2 -stvf create_itxa_db.sql

db2 -stvf create_sfg_db.sql
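
Optionally, I can confirm that both databases exist from the same session by listing the DB2 database directory:

db2 list db directory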

After a few minutes, both the ITXA and SFG databases have been created. I'm going to take note of the following information from my database configuration, which I'll need later for the ITXA and SFG installations. You can find the <DB2 LB Cluster IP> by running either of the following commands in the db2 namespace and looking for the cluster IP of the db2-lb service:

oc get services

kubectl get services

      • Vendor of the database (dbVendor / dbType): db2
      • Cluster IP address of the load balancer (dbHostIp / dbHost): <DB2 LB Cluster IP>
      • Port for the database load balancer (dbPort): 50000
      • Database user (dbUser / DB_USER): db2inst1
      • Database name (databaseName / dbData): ITXADB / SFGDB
      • Database password (dbPassword / DB_PASSWORD): <DB2 Password>
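
If you prefer to read the cluster IP directly rather than scanning the service list, a jsonpath query also works (shown with oc; the kubectl form is identical):

oc get service db2-lb -n db2 -o jsonpath='{.spec.clusterIP}'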

ITXA Installation

Prerequisites

Prior to installation, I am going to ensure I've met the prerequisites listed under the installation prerequisites for ITXA. I will also need my OpenShift cluster’s subdomain URL which, in my case, is us-south.containers.appdomain.cloud.

Installation

Because I am integrating ITXA with SFG, the docs suggest installing ITXA in the same namespace as SFG. Since I am installing ITXA before SFG, I'll create the new namespace for them both now. I'll name the project sfg-itxa-nonprod, but it can be named anything:

oc new-project sfg-itxa-nonprod

oc adm policy add-scc-to-user anyuid -z default -n sfg-itxa-nonprod

Or, if using kubectl, you can accomplish the same with the following steps combined with the Role Based Access Control steps outlined in the Helm chart README file.

If using kubectl, first create the namespace and service account:

kubectl create namespace sfg-itxa-nonprod

kubectl create serviceaccount default

Then, if using kubectl, create a Role and RoleBinding by placing the following resource definitions into a file named itxa_rbac.yaml:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibm-itxa-role
  namespace: sfg-itxa-nonprod
rules:
  - apiGroups: ['route.openshift.io']
    resources: ['routes','routes/custom-host']
    verbs: ['get', 'watch', 'list', 'patch', 'update']
  - apiGroups: ['','batch']
    resources: ['secrets','configmaps','persistentvolumes','persistentvolumeclaims','pods','services','cronjobs','jobs']
    verbs: ['create', 'get', 'list', 'delete', 'patch', 'update']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibm-itxa-rolebinding
  namespace: sfg-itxa-nonprod
subjects:
  - kind: ServiceAccount
    name: default
    namespace: sfg-itxa-nonprod
roleRef:
  kind: Role
  name: ibm-itxa-role
  apiGroup: rbac.authorization.k8s.io

To create the Role and RoleBinding, run either of the following commands:

oc create -f itxa_rbac.yaml

kubectl create -f itxa_rbac.yaml

I then create the following secrets in the sfg-itxa-nonprod namespace, modeled on those found under ibm-itxa-prod-blog/ibm_cloud_pak/pak_extensions/pre-install/secret in the ITXA Helm chart.

The following secrets are for database connection information, TLS KeyStore password, and the password for the ITXA admin user:

apiVersion: v1
kind: Secret
metadata:
  name: itxa-db-secret
type: Opaque
stringData:
  databaseName: ITXADB
  dbUser: db2inst1
  dbPassword: <DB2 Password>
  dbHostIp: "<DB2 LB Cluster IP>"
  dbPort: "50000"
---
apiVersion: v1
kind: Secret
metadata:
  name: tls-keystore-secret
type: Opaque
stringData:
  tlskeystorepassword: <Your TLS Keystore Password>
---
apiVersion: v1
kind: Secret
metadata:
  name: itxa-user-secret
type: Opaque
stringData:
  adminPassword: "<Your Password for Admin User>"
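
Assuming all three secret definitions above are saved in a single file, for example itxa_secrets.yaml (the filename is my own choice), they can be created with either of:

oc create -f itxa_secrets.yaml -n sfg-itxa-nonprod

kubectl create -f itxa_secrets.yaml -n sfg-itxa-nonprod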


With the secrets made, I will copy the ITXA values.yaml file into a new file called override.yaml and modify the following values to meet my specification:

global.license: true
global.tlskeystoresecret: "tls-keystore-secret"
global.database.dbvendor: DB2
global.database.dbDriver: "db2jcc4.jar"
global.persistence.useDynamicProvisioning: true
global.install.itxaUI.enabled: true
global.install.itxadbinit.enabled: true
itxauiserver.ingress.host: "asi.us-south.containers.appdomain.cloud"
itxauiserver.ingress.ssl.enabled: false
itxauiserver.ingress.ssl.secretname: ""
itxadatasetup.dbType: "db2"

Note that itxauiserver.userSecret already defaults to "itxa-user-secret", the secret I created earlier for the admin user password, and global.appsecret already defaults to "itxa-db-secret", the secret I created for my DB2 connection information. I also set global.tlskeystoresecret to "tls-keystore-secret", the secret I made for my KeyStore password.

Now that my ITXA database and YAML file are ready, I'll run this Helm command (which I modified from the one given in the ITXA README file) to install ITXA:

helm install my-itxa-release -f override.yaml --timeout 3600s .

After the ITXA database setup finishes (which may take some time), I will set the following value back to false in the ITXA Helm chart's override.yaml file and run another Helm upgrade:

global.install.itxadbinit.enabled: false
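
The upgrade mirrors the install command; from the ITXA Helm chart folder:

helm upgrade my-itxa-release -f override.yaml --timeout 3600s .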

SFG Installation

Installation Prerequisites

These installation steps assume that your cluster environment meets the installation prerequisites listed in the SFG docs. I will also need my OpenShift cluster’s subdomain URL which, in my case, is us-south.containers.appdomain.cloud.

SFG

If you are installing SFG without ITXA, this is where you would create the namespace for SFG; refer to the ITXA installation steps for instructions on properly establishing a new namespace.

I will then create a Role and RoleBinding by placing the following YAML definitions into a file named sfg_rbac.yaml:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibm-sfg-role
  namespace: sfg-itxa-nonprod
rules:
  - apiGroups: ['route.openshift.io']
    resources: ['routes','routes/custom-host']
    verbs: ['get', 'watch', 'list', 'patch', 'update']
  - apiGroups: ['','batch']
    resources: ['secrets','configmaps','persistentvolumes','persistentvolumeclaims','pods','services','cronjobs','jobs']
    verbs: ['create', 'get', 'list', 'delete', 'patch', 'update']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibm-sfg-rolebinding
  namespace: sfg-itxa-nonprod
subjects:
  - kind: ServiceAccount
    name: default
    namespace: sfg-itxa-nonprod
roleRef:
  kind: Role
  name: ibm-sfg-role
  apiGroup: rbac.authorization.k8s.io

With the file created, I can run the following command to create the role and rolebinding:

oc create -f sfg_rbac.yaml

The SFG docs mention that I need to create secrets for my containerized deployment; these can be found under the ibm_cloud_pak/pak_extensions/pre-install/secret folder in the Helm chart. However, for my installation I won't provide KeyStore, TrustStore, or JMS secrets.

The system passphrase secret for SFG is used to start the system and to access protected system information:

apiVersion: v1
kind: Secret
metadata:
  name: b2b-system-passphrase-secret
type: Opaque
stringData:
  SYSTEM_PASSPHRASE: <Your System Passphrase>

The database secret is used to store the authentication information for the database instance:

apiVersion: v1
kind: Secret
metadata:
  name: b2b-db-secret
type: Opaque
stringData:
  DB_USER: db2inst1
  DB_PASSWORD: <DB2 Password>

Both of the above secrets can be placed in individual YAML files; I’ll name them system_passphrase_secret.yaml and b2b_db_secret.yaml, though you can name them anything. To create the secrets, I’ll run either of the following sets of commands:

oc create -f system_passphrase_secret.yaml 

oc create -f b2b_db_secret.yaml

Or, if using kubectl:

kubectl create -f system_passphrase_secret.yaml

kubectl create -f b2b_db_secret.yaml
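
Either way, I can confirm that both secrets now exist in the namespace:

oc get secrets -n sfg-itxa-nonprod

kubectl get secrets -n sfg-itxa-nonprod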

I'll then create a copy of the values.yaml file found in the Helm chart. I'll name this copy override.yaml. The values I'm going to change in it are:

ac.backendService.ports:
- name: adapter-1
  nodePort: 30201
  port: 30201
  protocol: TCP
  targetPort: 30201
ac.ingress.external.tls.enabled: false
ac.ingress.internal.host: ac.us-south.containers.appdomain.cloud
ac.ingress.internal.tls.enabled: false
 
api.ingress.internal.host: api.us-south.containers.appdomain.cloud
api.ingress.internal.tls.enabled: false
api.internalAccess.enableHttps: false
 
appResourcesPVC.enabled: false
 
asi.backendService.ports:
- name: adapter-1
  nodePort: 30201
  port: 30201
  protocol: TCP
  targetPort: 30201
asi.ingress.external.tls.enabled: false
asi.ingress.internal.host: asi.us-south.containers.appdomain.cloud
asi.ingress.internal.tls.enabled: false
asi.internalAccess.enableHttps: false
 
global.license: true
 
persistence.useDynamicProvisioning: true
 
purge.internalAccess.enableHttps: false
purge.schedule: 0 0 * * *
 
resourcesInit.enabled: true
setupCfg.adminEmailAddress: <Your Admin Email Address>
setupCfg.dbData: SFGDB
setupCfg.dbDrivers: db2jcc4.jar
setupCfg.dbHost: <DB2 LB Cluster IP>
setupCfg.dbPort: 50000
setupCfg.dbSecret: b2b-db-secret
setupCfg.dbVendor: db2
setupCfg.smtpHost: localhost
setupCfg.systemPassphraseSecret: b2b-system-passphrase-secret
setupCfg.useSslForRmi: false

From within the SFG Helm chart folder, I'll run the following command to install SFG in my namespace with the release name my-sfg-release:

helm install my-sfg-release --namespace sfg-itxa-nonprod --timeout 90m0s -f override.yaml .

Verification

The resulting SFG installation will have created a set of routes and services to connect to your instance. To access the SFG dashboard, I first confirm that the Helm release has deployed:

helm status my-sfg-release

I will then check the status of the ASI, AC, and API pods that were deployed by the chart and ensure that they have the 'Running' status:

oc get pods -l release=my-sfg-release -n sfg-itxa-nonprod -o wide

Or, if using kubectl, by running:

kubectl get pods -l release=my-sfg-release -n sfg-itxa-nonprod -o wide

Finally, to connect to my SFG dashboard, I will find the URL with the following template:

<Internal ASI Ingress Hostname>/dashboard

For me, because my internal ASI ingress hostname was asi.us-south.containers.appdomain.cloud, my dashboard URL is:

asi.us-south.containers.appdomain.cloud/dashboard

You can also find the exact route URL by running either of the following oc / kubectl commands:

oc get routes -o wide | grep dashboard

kubectl get routes -o wide | grep dashboard

Following this URL brings me to the SFG login page, where I can log in with my authentication details.

The default login credentials are User ID 'admin' and password 'password'. After changing the admin password, I land on the SFG dashboard.

ITX Integration

Prerequisites

Prior to integration, I'll check that I meet the prerequisites listed on the ITX integration page of the docs.

Integration

To integrate ITX with SFG, I'll follow the steps outlined in the docs for SFG and modify my current SFG Helm release by changing the following in the override.yaml file of the running SFG deployment's Helm chart:

itxIntegration.enabled: true

itxIntegration.dataSetup.enabled: true

I'll then upgrade the Helm chart and, as mentioned in the docs, make sure to turn the database setup variable back to false after the upgrade:

itxIntegration.dataSetup.enabled: false
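
Each of these upgrades mirrors the original install command, run from the SFG Helm chart folder:

helm upgrade my-sfg-release --namespace sfg-itxa-nonprod --timeout 90m0s -f override.yaml .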

Verification

I'll verify my installation by checking that the ITX Map Service is now enabled in the SFG dashboard.

I know that within my SFG dashboard, the service is found by following this path:

"Administration Menu" -> "Deployment" -> "Services" -> "Configuration" -> "List" -> "Search by Service Type"

ITXA Integration

Prerequisites

Prior to integration, I am again going to ensure I've met the prerequisites listed on the ITXA integration page of the docs. I will also need the ASI URL I provided in both the ITXA and SFG setup, which for me is asi.us-south.containers.appdomain.cloud.

Integration

ITXA is deployed in the same namespace as SFG, so I'll run a Helm upgrade on my SFG deployment to integrate it. Keeping in mind that integrations.itxaIntegration.appSecret maps to the ITXA DB2 connection secret, just as global.appsecret did in the ITXA Helm chart, and following the SFG documentation, these are the values to change in the SFG override.yaml file:

integrations.itxaIntegration.appSecret: itxa-db-secret
integrations.itxaIntegration.dataSetup.enabled: true
integrations.itxaIntegration.enabled: true
integrations.itxaIntegration.sso.host: asi.us-south.containers.appdomain.cloud
integrations.itxaIntegration.sso.port: 80
integrations.itxaIntegration.sso.ssl.enabled: false

Note that integrations.itxaIntegration.sso.port, which defaults to 443 in the SFG YAML file, has been changed to 80 because that is the default OCP HTTP ingress listening port; since I have disabled SSL, the default encrypted HTTPS port 443 will not work.

I will now run a Helm upgrade on my SFG deployment to finish the ITXA integration, followed by one final Helm upgrade to ensure I set the database setup job back to false:

integrations.itxaIntegration.dataSetup.enabled: false
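
Before setting the flag back to false, it is worth confirming that the ITXA data setup job completed. The exact job name depends on your release, so this grep is only illustrative:

oc get jobs -n sfg-itxa-nonprod | grep -i itxa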

Verification

To verify my installation, I find that the following services are now enabled in the SFG dashboard:

      • SPE Check Pending Ack Status Service

      • SPE De-enveloping Service

      • SPE Enveloping Service

      • SPE Transformation Service

I found these services by following:

"Administration Menu" -> "Deployment" -> "Services" -> "Configuration" -> "Search" -> "Service Name:" -> "SPE" -> "Go!"

I will also verify that I can access the Standards Processing Engine (SPE) Trading Partner UI by following:

"Administration Menu" -> "Standards Processing Engine" -> "SPE Server" -> "Launch SPE Trading Partner UI"

If you cannot access the SPE Trading Partner UI, ensure that the ingress host ports match between ITXA and SFG and that the ITXA UI server port is changed from 443 to 80. If you still cannot access it, refer to the bottom of the docs page dedicated to integrating ITXA for steps on changing certain configuration properties and for other troubleshooting steps.

Finally, I'll run an ITXA sample from my ASI server pod. In this example I will run the Healthcare sample pack. First, I will open a remote shell session into the ASI server pod, which is named sfg-itxa-nonprod-b2bi-asi-server-0 in my namespace. I'll run the following command:

oc rsh sfg-itxa-nonprod-b2bi-asi-server-0

Once in the pod, I will navigate to where the SPE pack script files are located:

cd /opt/IBM/spe/bin

I will now run the spesetup.sh script:

. ./spesetup.sh

After the setup script finishes, I run the Healthcare sample pack setup script:

./spesetupsamples-packhc.sh

Once the setup script for the Healthcare sample pack finishes, I will then change directories back into bin and run the Healthcare sample pack:

cd bin

./sperunsamples-packhc.sh

And, once prompted, provide standard: hipaa.

Resources

Helm Charts

SFG Version 6.2.0.1

ITXA Version 10.0.1.8

Installation Document References

Installing Sterling SFG / B2B Integrator using Certified Containers

Installing ITXA using Certified Containers

ITX and ITXA Integration with Sterling B2B Integrator

Acronyms

  • OCP: OpenShift Container Platform

  • SFG: IBM Sterling File Gateway

  • ITX: IBM Transformation Extender

  • ITXA: IBM Transformation Extender Advanced

  • B2B(i): Business to Business (Integrator)
