
IBM Sterling B2B Integrator Installation on Red Hat OpenShift Certified Container Platform: External Purge

By Manisha Khond posted Tue February 14, 2023 08:42 PM

  


Overview

Sterling B2B Integrator uses a set of components to move business process data that has reached its specified lifespan out of the live database.

These components include:

Index Business Process service

Backup Business Process service

Purge service

Data that has been removed using the Backup Business Process component can later be restored to the same version using the Restore Business Process component.

Business process data written to the database is determined by the persistence level specified in the Business Process Manager or in the noapp.properties file. Decreasing the persistence level improves business process performance at the cost of full tracking for each step of the business process.

Business processes are eligible for archiving as soon as their archive flag is set by the Index Business Process service. You do not have to wait until a business process exceeds its lifespan before archiving it. After the archive flag is set, the business process is then archived the next time the Backup Business Process service is run either by schedule or manually. The data is still available in the system because it cannot be purged (with the Purge service) until it has exceeded its lifespan.

The Index Business Process service runs a process that looks for archiving parameters defined in the business process. When the process finds these parameters, the process creates summary information and writes the information to a table in the database.

The Backup Business Process service runs based on how you have configured your archive settings. The Backup Business Process service retains the business process and its related data to a file system as Java™ serialized objects for backup or purging. The Purge service then runs and removes data from the database, file system, or both as specified in the archive settings.

Active business process instances (for example, halting, halted, or interrupted) cannot be archived. Only data for completed or terminated business processes that have been indexed can be archived. You can schedule the intervals at which you want to index, archive, and purge the contents of the database. You can also define the lifespan, or the length of time in days and hours, to retain the data in the live database tables. This is the length of time until the business process instance expires.

There are two types of purge processes available: Internal Purge and External Purge.

Internal Purge is the default purge process, which is automated.

The External Purge process can be run manually from a command line for an installation that uses IBM Installation Manager. For an installation that uses Certified Containers, it runs as a separate Kubernetes job.

Note that in order to run External Purge, Internal Purge must be disabled.

This blog details the process to install External Purge on Red Hat OpenShift Certified Container platform.

What is Red Hat OpenShift Container Platform?

Red Hat® OpenShift® Container Platform is a consistent hybrid cloud foundation for building and scaling containerized applications.

For more information, see https://www.redhat.com/en/technologies/cloud-computing/openshift/container-platform

What is Helm?

Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

For more information, see https://helm.sh/

Before install:

Installing External Purge requires downloading the External Purge image and configuring the Helm chart. The steps below assume that you have already downloaded all the images and the Helm chart.

Before starting the installation of External Purge, configure the purge parameters in values.yaml in the Helm chart. The purge settings ship with default values.

Example: Below are the External Purge settings for B2BI 6.1.0.3.

purge:
  enabled: true
  image:
    repository: "cp.icr.io/cp/ibm-b2bi/b2bi-purge"
    # Provide the tag value in double quotes
    tag: "6.1.0.3"
    digest: sha256:314ff91441d218032d667fa56435b132960a7c6fb20bbd60d35eb0a56af25672
    pullPolicy: IfNotPresent
    pullSecret: ""
  # Provide a schedule for the purge job as a cron expression. For example "0 0 * * *" will run the purge job at 00:00 every day
  schedule:
  startingDeadlineSeconds: 60
  activeDeadlineSeconds: 3600
  concurrencyPolicy: Forbid
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  env:
    jvmOptions:
    # Refer to global env.extraEnvs for sample values
    extraEnvs: []
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 100m
      memory: 500Mi
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution: []
    preferredDuringSchedulingIgnoredDuringExecution: []

Configure External Purge settings:

This section explains the fields in values.yaml for External Purge and indicates which ones to override and which to keep at their default values.

enabled
Set to true to enable External Purge.

image.repository
Name of the tagged image that is pushed to the repository.

image.tag
Tag of the image that is pushed to the repository.

image.digest
Digest of the image.

image.pullPolicy
Image pull policy. Keep the default value.

image.pullSecret
The image pull secret to use in OpenShift.
schedule
The crontab expression that controls when External Purge is executed.

Examples:

"*/30 * * * *"    Run the purge job every 30 minutes
"*/1 * * * *"     Run once a minute
"0 0 * * *"       Run once a day, at midnight
"0 * * * *"       Run once an hour, at the beginning of the hour

startingDeadlineSeconds
This field defines a deadline (in whole seconds) for starting the Job if it misses its scheduled time for any reason. After missing the deadline, the CronJob skips that instance of the Job (future occurrences are still scheduled). Kubernetes counts a Job that misses its configured deadline as failed. If you don't specify startingDeadlineSeconds for the CronJob, the Job occurrences have no deadline.

activeDeadlineSeconds
The maximum time, in seconds, that a purge Job is allowed to run before Kubernetes terminates it. Keep the default value.

concurrencyPolicy
Specifies how to treat concurrent executions of a Job that is created by this CronJob. The following values can be set; the chart sets Forbid by default, and you should keep that setting for External Purge.

Allow (the Kubernetes default): The CronJob allows concurrently running Jobs.

Forbid: The CronJob does not allow concurrent runs; if it is time for a new Job run and the previous Job run hasn't finished yet, the CronJob skips the new Job run.

Replace: If it is time for a new Job run and the previous Job run hasn't finished yet, the CronJob replaces the currently running Job run with a new Job run.

suspend
You can suspend execution of Jobs for a CronJob by setting the suspend field to true. The field defaults to false; keep the default for External Purge.

This setting does not affect Jobs that the CronJob has already started. If you set the field to true, all subsequent executions are suspended (they remain scheduled, but the CronJob controller does not start the Jobs to run the tasks) until you unsuspend the CronJob.

successfulJobsHistoryLimit, failedJobsHistoryLimit
These fields specify how many completed and failed Jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 keeps none of the corresponding kind of Job after it finishes. Keep the default values.

env.jvmOptions
Set this field only if you need to modify the default JVM options for the purge job.

resources

You can specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).

When you specify the resource request for containers in a Pod, the kube-scheduler uses this information to decide which node to place the Pod on.

When you specify a resource limit for a container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set.

The kubelet also reserves at least the request amount of that system resource specifically for that container to use.

Keep the default resource settings.

nodeAffinity

Node affinity allows you to constrain which nodes your Pod can be scheduled on, based on node labels. There are two types of node affinity:

requiredDuringSchedulingIgnoredDuringExecution: The scheduler can't schedule the Pod unless the rule is met. This functions like nodeSelector, but with a more expressive syntax.

preferredDuringSchedulingIgnoredDuringExecution: The scheduler tries to find a node that meets the rule. If a matching node is not available, the scheduler still schedules the Pod.

Unless you need to constrain which nodes the External Purge pod can run on based on such rules, do not set nodeAffinity.
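Putting these fields together, the sketch below shows one way the purge settings might be supplied: a small supplementary values file that Helm merges with your main overrides.yaml (Helm applies multiple -f files in order, with the right-most file winning). The file name, the pull secret name, and the nightly schedule are illustrative assumptions; the image values are copied from the 6.1.0.3 example above.

# A minimal sketch, not a complete configuration: purge settings only.
cat > purge-overrides.yaml <<'EOF'
purge:
  enabled: true
  image:
    repository: "cp.icr.io/cp/ibm-b2bi/b2bi-purge"
    tag: "6.1.0.3"
    digest: sha256:314ff91441d218032d667fa56435b132960a7c6fb20bbd60d35eb0a56af25672
    pullPolicy: IfNotPresent
    pullSecret: "ibm-entitlement-key"   # assumed pull secret name in your namespace
  schedule: "0 0 * * *"                 # illustrative: run the purge job nightly at 00:00
  concurrencyPolicy: Forbid             # keep the chart default
  suspend: false
EOF

Pass the file at install or upgrade time with an additional -f purge-overrides.yaml flag, or fold the same values into your existing overrides.yaml.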

Installation

Steps to install External Purge on the OpenShift platform when the B2BI install already exists (a command sketch follows the list):

  • Download the External Purge image, load, tag and push it to the OpenShift internal Registry.
  • Modify overrides.yaml (values.yaml) to set purge.enabled to true.
  • Configure the External Purge settings in overrides.yaml (values.yaml).
  • In the Dashboard, navigate to Deployment -> Schedules and make sure that PurgeService is disabled.
  • Using customization UI, add:
    resourceMonitor.ScheduleMonitor.propertyValue.1=
    SELECT STATUS, SERVICENAME FROM SCHEDULE WHERE SERVICENAME IN ('BackupService','IndexBusinessProcessService','AssociateBPsToDocs','BPRecovery','BPLinkagePurgeService')
  • Install the modified helm chart via helm upgrade. This will create a new External Purge pod.
  • To verify that External Purge is running:
    Make sure that the External Purge pod is running.
    When External Purge starts, it creates two locks, hpp.Purge and PURGE_SERVICE. Check that these locks exist via Dashboard Operations -> Lock Manager.
  • The External Purge logs are created in <mapped volume folder>/logs/extPurge/PURGE (extpurge.log.*).
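The Dashboard and customization UI steps are manual, but the command-line parts of the list above could look roughly like the following sketch. Every angle-bracketed value, the tar file name, and the chart directory ./ibm-b2bi-prod are assumptions about your environment, not values from this blog.

# 1. Load, tag, and push the External Purge image to the OpenShift internal registry
podman load -i ibm-b2bi-purge-6.1.0.3.tar
podman tag cp.icr.io/cp/ibm-b2bi/b2bi-purge:6.1.0.3 \
  default-route-openshift-image-registry.apps.<cluster-domain>/<namespace>/b2bi-purge:6.1.0.3
podman push default-route-openshift-image-registry.apps.<cluster-domain>/<namespace>/b2bi-purge:6.1.0.3
# If the job should pull from the internal registry, point purge.image.repository
# in the overrides at image-registry.openshift-image-registry.svc:5000/<namespace>/b2bi-purge

# 2. Upgrade the existing release with purge.enabled set to true in the overrides
helm upgrade <release-name> ./ibm-b2bi-prod -f overrides.yaml -n <namespace>

# 3. Verify that the purge CronJob exists and that its pods run on schedule
oc get cronjob -n <namespace>
oc get pods -n <namespace> | grep purge

# 4. Inspect the purge pod log; files are also written to
#    <mapped volume folder>/logs/extPurge/PURGE (extpurge.log.*)
oc logs <purge-pod-name> -n <namespace>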

Steps to install External Purge on the OpenShift platform with a new B2BI install (a command sketch follows the list):

  • Download the helm chart corresponding to the version of B2BI you want to install.
  • Download all the images (B2BI, External Purge), then load, tag, and push them to the internal registry.
  • Configure the helm chart for the new installation.
  • To install External Purge, set purge.enabled to true in overrides.yaml (values.yaml).
  • Install B2BI and External Purge using helm install.
  • After the installation is completed, make sure that all the pods are running, including the External Purge pod.
  • In the Dashboard, navigate to Deployment -> Schedules and make sure that PurgeService is disabled.
  • Using customization UI, add:
    resourceMonitor.ScheduleMonitor.propertyValue.1=
    SELECT STATUS, SERVICENAME FROM SCHEDULE WHERE SERVICENAME IN ('BackupService','IndexBusinessProcessService','AssociateBPsToDocs','BPRecovery','BPLinkagePurgeService')
  • When External Purge starts, it creates two locks, hpp.Purge and PURGE_SERVICE. Check that these locks exist via Dashboard Operations -> Lock Manager.
  • The External Purge logs are created in <mapped volume folder>/logs/extPurge/PURGE (extpurge.log.*).
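For a fresh installation, the Helm step is a plain helm install rather than an upgrade. The sketch below uses the same assumed release, chart, and namespace names as the previous example:

# Install B2BI together with External Purge (purge.enabled: true in overrides.yaml)
helm install <release-name> ./ibm-b2bi-prod -f overrides.yaml -n <namespace>

# Confirm that all pods come up and that the purge CronJob was created
oc get pods -n <namespace>
oc get cronjob -n <namespace>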

Upgrade and maintenance of External Purge

When B2BI is upgraded or patched, External Purge should be updated as well. This involves downloading the new External Purge image, then loading, tagging, and pushing it to the registry. Update the Helm chart overrides.yaml (values.yaml) with the new repository and tag details, and install the updated chart with helm upgrade.
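As a sketch, with the same assumed naming as in the installation examples, an update to a newer fix pack level might look like this (<new-version> stands for whichever image tag you downloaded):

# Load, tag, and push the new External Purge image
podman load -i ibm-b2bi-purge-<new-version>.tar
podman tag cp.icr.io/cp/ibm-b2bi/b2bi-purge:<new-version> \
  default-route-openshift-image-registry.apps.<cluster-domain>/<namespace>/b2bi-purge:<new-version>
podman push default-route-openshift-image-registry.apps.<cluster-domain>/<namespace>/b2bi-purge:<new-version>

# Update purge.image.repository, tag, and digest in overrides.yaml, then roll forward
helm upgrade <release-name> ./ibm-b2bi-prod -f overrides.yaml -n <namespace>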

Disable External Purge

If you want to stop using External Purge and switch back to Internal Purge, modify overrides.yaml (values.yaml) to set purge.enabled to false, and then run helm upgrade.

In the Dashboard, navigate to Deployment -> Schedules and enable PurgeService again.
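A minimal sketch of the switch back, again with assumed release and namespace names:

# Turn External Purge off while keeping every other setting from overrides.yaml
helm upgrade <release-name> ./ibm-b2bi-prod -f overrides.yaml --set purge.enabled=false -n <namespace>

# Then re-enable the internal PurgeService schedule in the Dashboard
# (Deployment -> Schedules -> PurgeService)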



