Installing IBM Sterling Control Center Monitor to Monitor SFG, SFTP Adapter Transfers, ITXA, and SSP Servers

By Connor McGoey


Table of Contents

Introductory Notes

Helm Installation and Charts

DB2 Installation

CCM Installation

Configuring CCM to Monitor SFG

Configuring ITXA and CCM For Monitoring

Configuring IBM Sterling Secure Proxy Configuration Manager For OSA Monitoring

Glossary

Resources

Acronyms

Introductory Notes

Product

IBM Sterling Control Center Monitor (CCM) "tracks the critical events across your B2B and managed file transfer (MFT) infrastructure for improved operations, customer service and B2B governance. It applies rules to alert key audiences when there is a problem with a server, process or transfer. Actionable Dashboards are customized for various types of users."

Intent

The purpose of this blog is to provide non-production details on how to deploy Sterling Control Center Monitor and use the default SFG/B2Bi Web Service to enable monitoring of SFG/B2Bi business processes and SFTP file transfers. Additionally, this blog covers the steps necessary to set up both ITXA and SSP CM to send events to CCM for monitoring. Each step covers all information necessary to deploy with this configuration. If your deployment needs specific information not covered in this blog, or if you wish to learn more about some of the installation steps, refer to the Glossary or Resources subsections for additional information and/or links.

This blog does not cover deploying IBM Sterling Control Center Director as it does not work in containers.

Presumptions

Prior to following the installation steps in this blog, it is important to note that the environment and resulting deployments should not be used to replicate and/or produce a production environment for CCM. Additionally, a few presumptions are made with regard to these installations and their steps:

    • You have access to an OpenShift (or Kubernetes) cluster and the oc (or kubectl) CLI with sufficient privileges to create namespaces, service accounts, and Security Context Constraints. 
    • Your cluster can pull the required DB2 and CCM images (for example, via an IBM entitlement key). 
    • An SFG or B2Bi deployment, an ITXA UI deployment, and an SSP CM deployment already exist and are accessible from within the cluster.

Proposed Deployment

What will be deployed is as follows:

    • An IBM Db2 v11.5.5.1 instance with one database, CCMDB, running in it, plus a load balancer service providing connection to the DB2 instance via cluster IP and a persistent volume for storage. 
    • A CCM v6.3.1.0 instance with one pod, a service, and a route for connection. This instance will connect to the DB2 database.
    • One persistent volume for CCM for runtime environments such as logs and configuration.

Deployment Order

The order of deployment and configuration for this blog:

      1. CCM DB2 Database Installation and Configuration 
      2. CCM Installation
      3. Configuring CCM to Monitor SFG Business Processes and SFTP Adapter
      4. Configuring ITXA and CCM for Monitoring
      5. Configuring SSP CM for OSA Monitoring

As outlined in the presumptions, an SFG or B2Bi deployment, ITXA UI deployment, and an SSP CM deployment must be accessible from within the cluster. 

Helm Installation and Charts

These installations use Helm version 3.10.1; Helm versions 3.10.2 through 3.15.1 (the most recent release at the time of writing) should also work. The Helm chart for IBM Sterling Control Center Monitor (chart version 3.1.3) is linked under the Resources subsection.

To install Helm, I first download the version 3.10.1 package from the GitHub repo.

With the tar downloaded, I unpack it and move the Helm binary to my bin folder:

tar -zxvf <Helm Package>

mv <Helm Binary> <bin Location>/bin/helm
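As a concrete example, on a Linux amd64 machine the two commands might look like this (the package name matches the 3.10.1 release; the bin location is an assumption about your setup):

tar -zxvf helm-v3.10.1-linux-amd64.tar.gz

mv linux-amd64/helm /usr/local/bin/helm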

You can check if Helm is installed and which version it is by running the following command in your command line:

helm version
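Output resembling the following confirms the installation (the commit hash and Go version vary by build):

version.BuildInfo{Version:"v3.10.1", GitCommit:"...", GitTreeState:"clean", GoVersion:"..."}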

DB2 Installation

Installation Prerequisites

These installation steps assume that your cluster environment has access to pull the image for DB2.

Namespace and Service Account

For this installation, I am going to use an instance of DB2 running in its own namespace to replicate what should be done in a production environment. To do this, I will create the new namespace for the DB2 instance:

oc new-project ccm-db-nonprod

I'll then create a service account within the namespace and give it necessary permissions:

oc create serviceaccount <DB2 Service Account>

oc adm policy add-scc-to-user privileged -n ccm-db-nonprod -z <DB2 Service Account>

Or, if using kubectl commands:

kubectl create namespace ccm-db-nonprod

kubectl create serviceaccount <DB2 Service Account> -n ccm-db-nonprod

Note that Security Context Constraints are an OpenShift-specific resource, so the add-scc-to-user step above requires the oc CLI; there is no direct kubectl equivalent.

Database Setup

I am now going to install my DB2 instance and a load balancer for it using the following YAML file as a template. I'll name this YAML file db2_deploy.yaml. The YAML file uses ibmc-block-gold as the storage class for the Persistent Volume which is available to my OpenShift cluster under the IBM Cloud. You can use a different volume storage class available to your cluster instead: 

apiVersion: v1
kind: Service
metadata:
  name: db2-lb
spec:
  selector:
    app: db2
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db2
spec:
  selector:
    matchLabels:
      app: db2
  serviceName: "db2"
  replicas: 1
  template:
    metadata:
      labels:
        app: db2
    spec:
      serviceAccountName: <DB2 Service Account>
      containers:
      - name: db2
        securityContext:
          privileged: true
        image: ibmcom/db2:11.5.5.1
        env:
        - name: LICENSE 
          value: accept 
        - name: DB2INSTANCE 
          value: db2inst1 
        - name: DB2INST1_PASSWORD
          value: <Your DB2 Password>
        ports:
        - containerPort: 50000
          name: db2
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /database
          name: db2vol
  volumeClaimTemplates:
  - metadata:
      name: db2vol
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: ibmc-block-gold

I'll create my DB2 resources by running the following command:

oc create -f db2_deploy.yaml

Or, if using kubectl:

kubectl create -f db2_deploy.yaml

With my DB2 pod running, I'll open a remote shell session in the pod and switch to the DB2 instance user:

oc rsh <DB2 Pod>

su - db2inst1

If you are using kubectl, you can use the following command to get a shell into the pod before switching the user:

kubectl exec --stdin --tty <DB2 POD> -- /bin/bash

Then, if prompted, I'll authenticate my session by providing the <Your DB2 Password> value defined in the YAML file above.

Logged in as the DB2 user, I am going to make an SQL file which will be used to create the CCM database. I will name this database CCMDB and the file create_ccm_db.sql. Note that any filename will work.

In the create_ccm_db.sql file, I will put the following to create my database:

CREATE DATABASE CCMDB AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768;

Now I'm going to create the database with the file I just made by running the following:

db2 -stvf create_ccm_db.sql
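As a quick sanity check before moving on, I can list the database directory and connect from the same session (both are standard DB2 CLP commands):

db2 list database directory

db2 connect to CCMDB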

After a few minutes, the CCMDB database has been created. I'm going to take note of the following information from my database configuration which I know I'll need later for CCM installation. You can find the <DB2 LB Cluster IP> by running either of the following commands in the db2 namespace and looking for the cluster IP of the db2-lb service:

oc get services

kubectl get services
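Alternatively, a jsonpath query returns just the cluster IP (the service name db2-lb comes from the YAML above):

oc get service db2-lb -n ccm-db-nonprod -o jsonpath='{.spec.clusterIP}'

kubectl get service db2-lb -n ccm-db-nonprod -o jsonpath='{.spec.clusterIP}'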

    • Vendor of the database (dbType): db2
    • Cluster IP address of the load balancer (dbHost): <DB2 LB Cluster IP>
    • Port for the database load balancer (dbPort): 50000
    • Database User (dbUser): db2inst1
    • Database Name (dbName): CCMDB
    • Database Password: <DB2 Password>

CCM Installation

Configuring Administration

I next need to configure necessary security/administration resources for CCM. I will use the file ibm-sccm-scc.yaml located in the Helm chart at ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration. This will create a Security Context Constraint (SCC) named ibm-sccm-scc:

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: ibm-sccm-scc 
  labels:
    app: "ibm-sccm-scc"
    app.kubernetes.io/instance: "ibm-sccm"
    app.kubernetes.io/managed-by: "ibm-sccm"
    app.kubernetes.io/name: "ibm-sccm"
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities:
defaultAddCapabilities: null
fsGroup:
  type: MustRunAs
  ranges:
  - min: 1
    max: 4294967294
priority: 0
readOnlyRootFilesystem: false
requiredDropCapabilities:
- ALL
runAsUser:
  type: MustRunAsRange
  uidRangeMin: 1
  uidRangeMax: 4294967294
seLinuxContext:
  type: MustRunAs
seccompProfiles:
- runtime/default
supplementalGroups:
  type: MustRunAs
  ranges:
  - min: 1
    max: 4294967294
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
- nfs

I can then create the SCC with either of the following commands:

oc create -f ibm-sccm-scc.yaml

kubectl create -f ibm-sccm-scc.yaml

I will now create a Service Account to use during my Helm installation. CCM itself will live in its own namespace, ccm-nonprod, which I create first. I will name the Service Account ibm-ccm-sa:

oc new-project ccm-nonprod

oc create sa ibm-ccm-sa

Or, if using kubectl:

kubectl create namespace ccm-nonprod

kubectl create sa ibm-ccm-sa -n ccm-nonprod

I then add the ibm-sccm-scc SCC to the ibm-ccm-sa Service Account in my ccm-nonprod namespace:

oc adm policy add-scc-to-user ibm-sccm-scc -n ccm-nonprod -z ibm-ccm-sa

KeyStore and TrustStore Files

For the purposes of this blog, I will generate a self-signed certificate using keytool and use it to create my TrustStore and KeyStore. Do not use this configuration in a production environment. Refer to the documentation linked under Resources for more information on configuring KeyStore and TrustStore files.

I will create a KeyStore file CCenter.keystore and a TrustStore file CCenter.truststore which both contain a single certificate with alias ccalias120 using the RSA algorithm and having a key size of 4096 bits. I can accomplish all of this by running these commands in order and providing the <KeyStore Password>, <TrustStore Password>, and other information when prompted by keytool:

keytool -genkeypair -alias ccalias120 -keyalg RSA -keysize 4096 -keystore CCenter.keystore

keytool -export -alias ccalias120 -keystore CCenter.keystore -file ccalias120.cer

keytool -import -alias ccalias120 -file ccalias120.cer -keystore CCenter.truststore 

Note the location of the newly created KeyStore and TrustStore files on your local machine.
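To double-check the contents of either store, keytool can list its entries (it will prompt for the corresponding password):

keytool -list -keystore CCenter.keystore

keytool -list -keystore CCenter.truststore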

Secrets

Finally, I will generate the secrets needed for CCM. To create my CCM administration secret, I'll use the ibm-sccm-secret.yaml template given in the Helm chart under the ibm_cloud_pak/pak_extensions/pre-install/secret directory.

Because this is a development environment, I will only specify the credentials necessary to deploy CCM. These are the database password, admin credentials, and the keystore and passwords. The email password, JMS credentials, and user key are therefore excluded, but recommended in a production environment.

Note that values supplied through a secret's data map must be base64 encoded. To avoid encoding by hand, I will pass the values in through the stringData map instead, which Kubernetes encodes automatically. If you use the data map, base64-encode each value with the following command template and paste the outputs into the secret file (the -n flag prevents a trailing newline from being encoded):

echo -n <Value> | base64

apiVersion: v1
kind: Secret
metadata:
  name: ibm-sccm-secret
type: Opaque
stringData:
  .ccDBPassword: <DB2 Password>
  .adminUserId: <Admin Name>
  .adminUserPassword: <Admin Password>
  .trustStorePassword: <TrustStore Password>
  .keyStorePassword: <KeyStore Password>

Next, I will create the certificate secret which will contain my KeyStore and TrustStore files. I will navigate to where I have my CCenter.keystore and CCenter.truststore files on my local machine and run either of the following commands to create my certificate secret ibm-sccm-certs-secret:

oc create secret generic ibm-sccm-certs-secret --from-file=keystore=CCenter.keystore --from-file=truststore=CCenter.truststore -n ccm-nonprod

kubectl create secret generic ibm-sccm-certs-secret --from-file=keystore=CCenter.keystore --from-file=truststore=CCenter.truststore -n ccm-nonprod
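A quick check that both secrets now exist:

oc get secret ibm-sccm-secret ibm-sccm-certs-secret -n ccm-nonprod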

Configuring and Installing CCM

With all pre-installation tasks completed, I can now create a copy of the provided values.yaml file from the Helm chart. I name this copy override.yaml. Note that I use ibmc-file-gold-gid as the storage class for dynamically provisioned persistent volume with ReadWriteMany as the access mode. This configuration is available to my OpenShift cluster under the IBM Cloud. If you aren't using IBM Cloud, you'll need to use a storage class available to your cluster.

I will be using the copy-dbdriver init container found in the Helm chart's values.yaml file to copy my DB2 database driver into the pod's file system. Refer to the comments in the file if you are using a database other than DB2. Note that although the command in the init container copies the file into /app/conf, the drivers are referenced in ccArgs.dbDrivers under the directory /app/CC/conf.

Also, because my OpenShift cluster's subdomain URL is us-south.containers.appdomain.cloud, I will use this as the postfix for the ingress host.

In override.yaml, I change the following values to meet my specifications:

ccArgs.adminEmailAddress: <Admin Email Address>
ccArgs.ccAdminEmailAddress: <Control Center Admin Email Address>
ccArgs.dbDrivers: '/app/CC/conf/db2jcc4.jar'
ccArgs.dbHost: '<DB2 LB Cluster IP>'
ccArgs.dbInit: 'true'
ccArgs.dbName: CCMDB
ccArgs.dbPort: '50000'
ccArgs.dbType: DB2
ccArgs.dbUser: db2inst1
ccArgs.keyAlias: "ccalias120"
ccArgs.keyStore: "CCenter.keystore"
ccArgs.smtpTLSEnabled: 'false'
ccArgs.trustStore: "CCenter.truststore"

extraInitContainers: 
 - name: "copy-dbdriver"
   repository: "cp.icr.io/cp/ibm-scc/ibmscc_dbdrivers"
   tag: "2024"
   imageSecrets: "ibm-entitlement-key"
   pullPolicy: Always
 command: "cp /ibm/scc/resoures/dbdrivers/db2/luw/v11.5/v4.33.31/db2jcc4.jar /app/conf/"
   digest:
     enabled: false
     value: sha256:a1bd211ef90e446af809db2421abdd0fe60e1ae6d44ef646fe98e97b94fd13ce
   userInput:
     enabled: true

ingress.enabled: true
ingress.host: 'ccm.nonprod.us-south.containers.appdomain.cloud'

license: true

persistentVolumeCCM.storageClassName: ibmc-file-gold-gid
persistentVolumeCCM.useDynamicProvisioning: true

persistentVolumeUserInputs.enabled: false

secret.certsSecretName: 'ibm-sccm-certs-secret'
secret.secretName: 'ibm-sccm-secret'

serviceAccount.create: false
serviceAccount.name: 'ibm-ccm-sa'

timeZone: America/New_York

After saving these changes to override.yaml, I create my Helm release which I will call ibm-sccm. To do this, I'll run the following command from within the Helm chart directory:

helm install ibm-sccm -f override.yaml . -n ccm-nonprod
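Optionally, before the real install, a dry run renders the chart without creating any resources, which is a handy way to catch override.yaml mistakes:

helm install ibm-sccm -f override.yaml . -n ccm-nonprod --dry-run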

Verification

To verify that I have successfully installed CCM, I will first check the status of the Helm release, service, and pod.

To check the status of my Helm release I can run:

helm status ibm-sccm -n ccm-nonprod

In my output, I see that the status is deployed:

...
NAME: ibm-sccm
NAMESPACE: ccm-nonprod
STATUS: deployed
REVISION: 1

I can then check the status of the CCM service and pod by running either of the following sets of commands:

oc describe svc -n ccm-nonprod

oc describe pods -n ccm-nonprod

If using kubectl:

kubectl describe svc -n ccm-nonprod

kubectl describe pods -n ccm-nonprod
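I can also watch the pod until it reports Ready (Ctrl+C to stop watching):

oc get pods -n ccm-nonprod -w

kubectl get pods -n ccm-nonprod -w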

Configuring CCM to Monitor SFG

Adding SFG Server to CCM

To configure CCM to monitor my SFG release, I first need to log in to the web user interface. To do this, I can obtain the OpenShift route created by the Helm release by running the following command:

oc get routes

Next to the route prefixed by {CCM Release Name}-ibm-sccm-web-{CCM Namespace} I see the HOST/PORT of my web route. I'll use this as the URL to access my CCM UI.
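To print just the route names and hosts, a custom-columns query works too:

oc get routes -n ccm-nonprod -o custom-columns=NAME:.metadata.name,HOST:.spec.host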

From here, I will access the Control Center Launch Page with the button, and then click on the IBM Sterling Control Center web console option. Here, I will provide my admin credentials to log in:

Once logged in to the dashboard, I will navigate to the top of the screen and click Servers and then Add Server on the left column pane:

Here, I can specify the details to add SFG as a server to CCM. I'll name the server SFG Server and leave the description blank as it is optional. I will click Next to go to the Connection configuration.

Under Server Type, I will scroll down and select IBM Sterling B2B Integrator:

I will then select the Node Type as Cluster not through a load balancer, which then allows me to specify my SFG ASI internal ingress hostname, the HTTP protocol, port 80 (the default for SFG HTTP), and my SFG admin credentials:

I then click Test Connection and wait for the success message. Once it succeeds, I click Next to go to the Settings configuration page.

I will ensure now that I have the following options set:

      • Monitor Business Process: Yes
      • Monitor File Gateway Activity: Yes

I will then check the box for Select the protocols you want to monitor, which selects all protocols for monitoring:

Once finished with this page, I can continue to click Next, leaving the default options for all subsequent pages until I have configured and confirmed the SFG server.

To verify that the SFG server is properly connected, I can navigate to Server List on the left column pane where I will see my new server listed with the READY Status indicated by a green up arrow:

Monitoring SFG Business Processes and SFTP File Transfers

With SFG connected to CCM and with the configurations given above, activity can be monitored via the Monitor tab on the top of the dashboard.

To monitor completed business processes, I can click on Completed Processes on the left column pane and provide optional search criteria such as a date range and/or my specific server to view completed business processes:

To monitor completed SFG file transfers done through my SFTP Adapter, I can click on Completed File Transfers on the left column pane and provide search criteria such as a date range and/or my specific server to view completed file transfers:

Configuring ITXA and CCM For Monitoring

Setting the ITXA Properties

To send events to CCM, I will first configure my ITXA customer_overrides.properties file which can be found in the itxa-config ConfigMap created during my ITXA Helm install. I obtain the ConfigMap for editing by running either of the following commands from within the project that ITXA is installed in (sfg-itxa-nonprod):

oc get cm itxa-config -o yaml > itxaConfigMap.yaml

kubectl get cm itxa-config -o yaml > itxaConfigMap.yaml

Within this file I find the customer_overrides.properties data field and uncomment the following lines:

event.repository.url=http://{host}:58082/sccwebclient/events
event.repository.username={username}
event.repository.password={password}
event.SystemStatusTimerSeconds=300
event.SystemStatusInstanceName=SPE 
event.SystemStatusLocale=en_US
event.dir=C:/IBM/Standards Processing Engine 9.0.0/ccEventStore
spe.EventProvider=com.ibm.spe.core.events.SPEControlCenterEventProvider
spe.SystemStatusProvider=com.ibm.spe.core.events.SPESystemStatusProvider

I will then replace {host} with the LoadBalancer IP found in the CCM service validation step above, and {username} and {password} with the non-base64-encoded admin name and password I specified in the CCM secret.
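As an illustration only, with a hypothetical cluster IP of 172.21.93.114 and hypothetical admin credentials, the first three lines would end up looking like this:

event.repository.url=http://172.21.93.114:58082/sccwebclient/events
event.repository.username=ccadmin
event.repository.password=MySecretPassword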

I will save the itxaConfigMap.yaml file with these changes and apply them to my existing ConfigMap by running either of the following commands:

oc apply -f itxaConfigMap.yaml

kubectl apply -f itxaConfigMap.yaml
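To confirm the override took effect, I can grep the live ConfigMap for the event repository settings:

oc get cm itxa-config -o yaml | grep event.repository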

Finally, I will restart SFG/B2Bi by deleting the SFG/B2Bi AC, API, and ASI pods (which are managed by StatefulSets and will redeploy once deleted). To do this, I can run either of the following sets of commands on all three pods:

oc delete pod {AC Pod}

oc delete pod {API Pod}

oc delete pod {ASI Pod}

...

kubectl delete pod {AC Pod}

kubectl delete pod {API Pod}

kubectl delete pod {ASI Pod}

Configuring ITXA Base Summarizer in CCM

To define the base summarizer in CCM, I will first need to obtain the ITXABaseSummarizer.class file from my ITXA installation. To find where this file is located in my ITXA pod, I will run either of the following commands:

oc exec {ITXA Pod} -- find / | grep ITXABaseSummarizer.class

kubectl exec {ITXA Pod} -- find / | grep ITXABaseSummarizer.class

In my ITXA pod, the file is located at /opt/IBM/spe/ICC/com/ibm/spe/controlcenter/ITXABaseSummarizer.class. To copy this file to my local machine's current directory, I can run either of the following commands:

oc cp {ITXA Pod}:/opt/IBM/spe/ICC/com/ibm/spe/controlcenter/ITXABaseSummarizer.class ./ITXABaseSummarizer.class

kubectl cp {ITXA Pod}:/opt/IBM/spe/ICC/com/ibm/spe/controlcenter/ITXABaseSummarizer.class ./ITXABaseSummarizer.class

Then, from within my CCM project, I will copy this file from my local machine into the /conf/classes/com/ibm/spe/controlcenter directory in my CCM pod. In my CCM pod, /conf is located at /app/CC/conf. The directory chain classes/com/ibm/spe/controlcenter must exist first; rather than running individual mkdir commands inside the pod, a single mkdir -p creates the whole chain, as shown below:
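oc exec {CCM Pod} -- mkdir -p /app/CC/conf/classes/com/ibm/spe/controlcenter

kubectl exec {CCM Pod} -- mkdir -p /app/CC/conf/classes/com/ibm/spe/controlcenter

With the directory chain in place, I can copy the file into the pod with either of the following commands: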

oc cp ITXABaseSummarizer.class {CCM Pod}:/app/CC/conf/classes/com/ibm/spe/controlcenter/ITXABaseSummarizer.class

kubectl cp ITXABaseSummarizer.class {CCM Pod}:/app/CC/conf/classes/com/ibm/spe/controlcenter/ITXABaseSummarizer.class

I will then restart CCM by deleting the pod (which is managed by a StatefulSet and will redeploy once deleted). To do this I can run either of the following commands:

oc delete pod {CCM Pod}

kubectl delete pod {CCM Pod}

Setting Up a Downed Server Rule

I will now set up a Rule in CCM which will be triggered by a message ID indicating a server is down. This Rule will trigger the built-in alert1 action, which generates a high-severity alert.

To do this, I will navigate to "Manage" on the top bar and then "Rules" on the left column. Here I can click the plus (+) button to create a new rule:

I will give this rule the following specification. Note that CCTR034E is the message code for a server being down and alert1 ships with CCM as the generic high alert action:

Name: Server Down

Description: This rule is triggered when a monitored server goes down

Key: Message Id

Operator: Matches

Value: CCTR034E

Action: alert1

I will then click "Save" to create this new rule. This rule will trigger after testing the connection to ITXA because ITXA is a dynamically discovered server.

Validating The Connection with a Test Event

To validate that ITXA is properly configured to send processed events to CCM, I will run a sample ITXA EDI process from within my SFG/B2Bi ASI server pod. Note that ITXA is monitored through dynamic discovery, meaning that ITXA is responsible for sending events to CCM to monitor.

First, I will open a remote shell session into my ASI server pod, which is named sfg-itxa-nonprod-b2bi-asi-server-0 in my sfg-itxa-nonprod namespace. I'll run the following command:

oc rsh sfg-itxa-nonprod-b2bi-asi-server-0

Once in the pod, I will navigate to where the SPE pack script files are located:

cd /opt/IBM/spe/bin

I will now run the spesetup.sh script:

. ./spesetup.sh

After the setup script finishes, I run the setup script for the EDI sample pack:

./spesetupsamples-packedi.sh

Once the setup script for the EDI sample pack finishes, I will then run the following command to do a sample envelope process using the X12 standard:

java com.ibm.spe.sample.SPESample api=envelope standard=x12 input=/opt/IBM/spe/examples/edi/x12/envelope/data/poin-multi.txt option=SenderID:MYCOMPANY option=ReceiverID:YOURCOMPANY option=AccepterLookupAlias:850 option=DriverTracking:true output=/opt/IBM/spe/examples/edi/x12/envelope/data/output

Note the option=DriverTracking:true setting, which allows the process event to be sent to CCM, and the example input/output file data locations.
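Before checking CCM, I can confirm the run produced output locally (the path is taken from the output option above):

ls -l /opt/IBM/spe/examples/edi/x12/envelope/data/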

Once finished, I will log into my CCM dashboard and see the ITXA server under Environmental Health:

I will click on the green circle which takes me to the ITXA subsection of the Servers page:

I click on ibm-b2bi-prod-b2bi-asi-server-0, which shows me the server details. I can then click on the "Processes" or "Events" tabs to get more information about the ITXA processes and the events from the server:

I can also click on the specific process ID to get more detailed information on the process and the events specific to that process:

Soon after running this ITXA process, the ITXA server will be detected as "down". I can see this through the alert I configured earlier on the dashboard where I see a new High alert:

If I click on "View High Alerts" I can see that my ASI pod which ran the ITXA process is being detected as down, meaning my Rule and alert were properly configured:

Audit logs of CCM and other report information can be found under the "Tools" tab in the top bar:

Configuring IBM Sterling Secure Proxy Configuration Manager For OSA Monitoring

SSP Network Policy

Before SSP can be configured to send data to CCM, I will need to ensure that my SSP namespace is configured to allow egress traffic to the web console port of my CCM installation. By default, the CCM web console listens on port 58082 (this is the same port ITXA is configured to send traffic to).

I create a new network policy in my SSP namespace (ssp-nonprod) with the following YAML definition in a file named sspcm_osa_network_policy.yaml:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: ccm-osa-policy
  namespace: ssp-nonprod
spec:
  podSelector:
    matchLabels:
      release: my-ssp-cm-release
  egress:
    - ports:
        - protocol: TCP
          port: 58082
  policyTypes:
    - Egress

I create the network policy by running:

oc create -f sspcm_osa_network_policy.yaml

Or, if using kubectl:

kubectl create -f sspcm_osa_network_policy.yaml
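To confirm the policy landed as intended:

oc describe networkpolicy ccm-osa-policy -n ssp-nonprod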

Enabling SSP OSA Node

The first step in configuring OSA on SSP is to enable the OSA node in my SSP CM Helm deployment. In my override.yaml file for my SSP CM deployment mentioned in my SSP blog I will change the following to true:

cmArgs.osaNodeEnable: true

I will then perform a Helm upgrade on my SSP CM in its namespace (for me this is ssp-nonprod) with this change in place and wait for the new SSP CM pod to spin up. From within the SSP CM Helm chart directory:

helm upgrade my-ssp-cm-release -f override.yaml . -n ssp-nonprod

Configuring SSP OSA Monitoring

To configure SSP OSA monitoring, I will need some information from my CCM release. The {host} is the LoadBalancer IP found in the CCM service validation step above, and {username} and {password} are the non-base64-encoded admin name and password I specified in the CCM secret.

CCM User ID: {username}

CCM User Password: {password}

ICC OSA URL Path: /sccwebclient/events (default value)

Event Processor Host/IP: {host}

Event Processor Port: 58082

To configure the OSA monitoring I will first log in to my SSP user interface from my original blog Deploying Sterling Secure Proxy CM/Engine on Red Hat OpenShift Using Certified Containers and Connecting to Sterling B2Bi SFTP Adapter. After logging in, I will navigate to System > System Settings > CMSystemSettings > ICC OSA monitoring.

I click the checkbox titled "Enable ICC OSA monitoring" and fill out the information I gathered above. I will leave the option for secure connection unchecked. 

Under the subsection "ICC OSA EP hosts and ports" I will click New and specify the host:port combination above. I'll then click OK to add the CCM EP host:port.

My final configuration is as follows:

I will then click Save to finish the configuration and enable OSA monitoring.

Validating OSA Through CCM

I can now log back into CCM and find the SSP Engine, Server, and Adapter are all being monitored:

For more information, I can navigate to the SSP server under Servers > All Servers > Ssp-osa. There is more information about my server here for Adapters, Engines, Processes, Alerts, and more:

Glossary

Helm

Helm allows for automating containerized deployments and upgrades when used in conjunction with a provided, configurable YAML file. This YAML file defines the relevant configuration for the charts. The key is to ensure that the file properly defines the deployment configuration that fits your needs. For issues regarding Helm, refer to the Helm documentation on how to install it and the commands available via the Helm CLI.

Resources

Helm Charts

CCM Version 6.3.1

Installation Document References

Deploying IBM CCM with Helm Charts

Configuring CCM KeyStore and TrustStore Files

Validating CCM Deployment

Configuring B2Bi/SFG for Monitoring by CCM

Configuring an IBM Transformation Extender Advanced and IBM Control Center Integration

Acronyms

  • OCP: OpenShift Container Platform
  • OSA: Open Systems Adapter
  • SCCM/CCM: IBM Sterling Control Center Monitor
  • SSP (CM): Sterling Secure Proxy (Configuration Manager)
  • SFG: IBM Sterling File Gateway
  • B2B(i): Business to Business (Integrator)
  • SFTP: Secure File Transfer Protocol
  • SCC: Security Context Constraint