
How do I install ODM Bronze topology with CP4BA 24.0.0?

By Johanne Sebaux posted Wed July 17, 2024 06:51 AM

  


Target audience: ODM user with ODM Administrator role
Estimated duration: 120 minutes.

  

This article is part of a series about Operational Decision Manager (ODM) topologies in the context of Cloud Pak for Business Automation (CP4BA). For more information about ODM environments and topologies, see CP4BA ODM topologies on OpenShift.

1. Introduction

This article describes how to deploy an ODM Bronze topology on OpenShift as a component of CP4BA 24.0.0.

ODM Bronze topology is an enterprise deployment of ODM in a single namespace of a cluster. It corresponds to the default production pattern deployment.

Schema of an ODM Bronze topology (fig. 1)

The Bronze topology is best suited for prototypes or applications with low production constraints (small footprint, no high availability). It can also be seen as the baseline for the Silver and Gold topologies and is referenced as such in other articles.

2. Installation

Prior to installation, go through the Planning for a CP4BA multi-pattern production deployment guide to understand what you need, what options you have, storage classes, security, permissions, high availability, license entitlements, and how you can measure the usage of your deployments.

Deploying the ODM production pattern involves choices that can lead to different installation instructions. In Review your options, there are several production deployment guides for CP4BA 24.0.0. In this article, we focus on CP4BA single-pattern production deployment on ROKS classic and OCP by following installation guides in PDF, which guides you through implementing your deployment in an OpenShift cluster.

As a prerequisite, follow the topics in Option 1: Preparing your cluster for an online deployment to set up your cluster before you create the ODM deployment in a specified namespace. The next section walks you through this preparation.

Note: for an air-gapped environment, see Option 2: Preparing your cluster for an air gapped (offline) deployment.

2.1 Cluster preparation steps

The cluster preparation procedure (online or offline) is summarized in the PDF files Setting up CP4BA on an online cluster and Setting up CP4BA on an offline cluster.

The table below goes through the online preparation procedure step-by-step.

Topic

Expected action and results

Preparing a client to connect to the cluster

This is an action. 

You must make sure that the client that you intend to use to connect to the OpenShift cluster has all the necessary tools. If any are missing, install them.
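For example, a minimal check of the client (assuming the oc CLI and the cluster API URL placeholder used later in this article) could be:

oc version --client

oc login https://api.<my_company_ocp_cluster>.com:6443 --username=<my_admin>

oc whoami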

Preparing your cluster

This is an action. 

Before you install any of the automation containers, you must prepare a cluster for the patterns that you want to use.

Preparing a namespace for the Cloud Pak operator

This is an action. 

All instances of an operator need a namespace whether it is on a private cloud (OCP) or on IBM Cloud® Public (ROKS). Depending on your platform type, either prepare the namespace on OCP or on ROKS.

An example to create a namespace (bronze) for ODM Bronze topology:

oc new-project bronze

Now using project "bronze" on server "https://api.<my_company_ocp_cluster>.com:6443".

Getting access to container images

This is a decision to be made. 

To get access to the container images from the IBM entitled registry, you must have a key to pull the images from the IBM registry.

Get your access key to the Cloud Pak container images here: My IBM Container Software Library.

Choose one of the following three options:

- Global pull secret

- Cluster scope pull secret

- Namespace pull secret
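For illustration, if you go with the namespace pull secret option, the secret could be created manually in the bronze namespace with a command along these lines (the cp4a-clusteradmin-setup.sh script shown below can also create it for you):

oc create secret docker-registry ibm-entitlement-key --docker-server=cp.icr.io --docker-username=cp --docker-password=<entitlement_key> -n bronze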

Setting up the cluster

This is an action. 

If you plan to use the Form view in Operator Hub, then set up the cluster with the OpenShift console.

You can also follow the procedure described in this section to set up the cluster by running shell scripts:

1. Install IBM License Service and IBM Certificate Manager by running a script.

2. Set up the cluster:

cd cert-kubernetes/scripts

./cp4a-clusteradmin-setup.sh *

Note: the script above comes from the CASE package.

At this stage, you have been through the checklist to prepare your cluster; all the available storage class names are displayed along with the infrastructure node name.

* : Sample output of the ./cp4a-clusteradmin-setup.sh script execution with default values and an Entitlement Key.

./cp4a-clusteradmin-setup.sh

[INFO] Setting up the cluster for IBM Cloud Pak for Business Automation

Do you wish setup your cluster for a online based CP4BA deployment or for a airgap/offline based CP4BA deployment:

1) Online

2) Offline/Airgap

Enter a valid option [1 to 2]: 1

Select the cloud platform to deploy:

1) RedHat OpenShift Kubernetes Service (ROKS) - Public Cloud

2) OpenShift Container Platform (OCP) - Private Cloud

3) Other (Certified Kubernetes Cloud Platform / CNCF)

Enter a valid option [1 to 3]: 2

What type of deployment is being performed?

ATTENTION: The BAI standalone only supports "Production" deployment type.

1) Starter

2) Production

Enter a valid option [1 to 2]: 2

[NOTES] If you are planning to enable FIPS for CP4BA deployment, this script can perform a check on the OCP cluster to ensure the compute nodes have FIPS enabled.

Do you want to proceed with this check? (Yes/No, default: No): No

[NOTES] You can install the CP4BA deployment as either a private catalog (namespace scope) or the global catalog namespace (GCN). The private option uses the same target namespace of the CP4BA deployment, the GCN uses the openshift-marketplace namespace.

Do you want to deploy CP4BA using private catalog (recommended)? (Yes/No, default: Yes): Yes

[NOTES] CP4BA deployment supports separation of operators and operands, the script can deploy CP4BA operators and it's capabilities in different projects.

Do you want to deploy CP4BA as separation of operators and operands? (Yes/No, default: No): No

Where do you want to deploy Cloud Pak for Business Automation?

Enter the name for a new project or an existing project (namespace): bronze

The Cloud Pak for Business Automation Operator (Pod, CSV, Subscription) not found in cluster

Continue....

Project "bronze" already exists! Continue...

[INFO] Creating project "ibm-cert-manager" for IBM Cert Manager operator catalog.

Project "ibm-cert-manager" already exists! Continue...

[] Created project "ibm-cert-manager" for IBM Cert Manager operator catalog.

[INFO] Creating project "ibm-licensing" for IBM Licensing operator catalog.

Project "ibm-licensing" already exists! Continue...

[] Created project "ibm-licensing" for IBM Licensing operator catalog.

[INFO] Creating ibm-cp4ba-common-config configMap for this CP4BA deployment in the project "bronze"

[] Created ibm-cp4ba-common-config configMap for this CP4BA deployment in the project "bronze".

This script prepares the OLM for the deployment of some Cloud Pak for Business Automation capabilities

Here are the existing users on this cluster:

1) Cluster Admin

2) <my_admin>

Enter an existing username in your cluster, valid option [1 to 2], non-admin is suggested: 2

[INFO] Creating cp4ba-fips-status configMap in the project "bronze"

[] Created cp4ba-fips-status configMap in the project "bronze".

Follow the instructions on how to get your Entitlement Key:

https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/24.0.0?topic=deployment-getting-access-images-from-public-entitled-registry

Do you have a Cloud Pak for Business Automation Entitlement Registry key (Yes/No, default: No): Yes

Enter your Entitlement Registry key:

Verifying the Entitlement Registry key...

Login Succeeded!

Entitlement Registry key is valid.

The existing storage classes in the cluster:

NAME                            PROVISIONER                         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE

managed-nfs-storage (default)   redhat-emea-ssa-team/hetzner-ocp4   Delete          Immediate           false                  112m

Creating docker-registry secret for Entitlement Registry key in project bronze...

secret/ibm-entitlement-key created

Done

[INFO] Applying the latest IBM CP4BA Operator catalog source...

[] IBM CP4BA Operator catalog source Updated!

[INFO] Starting to install IBM Cert Manager and IBM Licensing Operator ...

[] ibm-licensing-catalog/ibm-cert-manager-catalog pod ready!

All arguments passed into the setup_singleton.sh: --enable-licensing --license-accept --enable-private-catalog --yq <my_path>/cert-kubernetes/scripts/cpfs/yq/amd64/yq -c v4.2

[] oc command available

[] <my_path>/cert-kubernetes/scripts/cpfs/yq/amd64/yq command available

[] oc command logged in as <my_admin>

[] Channel v4.2 is valid

[INFO] No ibm-common-service-operator found on the cluster, skipping delegation check

[] Flag --enable-private-catalog is enabled, please make sure the CatalogSource is deployed in the same namespace as operator

# Check migrating LTSR ibm-licensing-operator

[INFO] There is no LTSR ibm-licensing-operator to migrate, skipping

# Check migrating and deactivating LTSR ibm-cert-manager-operator

[INFO] LTSR ibm-cert-manager-operator already deactivated, skipping

# Installing cert-manager

[] There is a cert-manager Subscription already

[] There is a cert-manager-webhook pod Running, so most likely another cert-manager is already installed

[INFO] Continue to upgrade check

[] Cluster has a RedHat cert-manager or Helm cert-manager, skipping

# Validate CatalogSource for operator ibm-licensing-operator-app in ibm-licensing namespace

[] CatalogSource ibm-licensing-catalog from ibm-licensing CatalogSourceNamespace is available for ibm-licensing-operator-app in ibm-licensing namespace

# Installing licensing

[] There is an ibm-licensing-operator-app Subscription already, so will upgrade it

# Checking whether Namespace ibm-licensing exist...

[] Namespace ibm-licensing already exists. Skip creating

# Checking whether OperatorGroup in ibm-licensing exist...

[] OperatorGroup already exists in ibm-licensing. Skip creating

# Updating ibm-licensing-operator-app in namesapce ibm-licensing...

[INFO] v4.2 is equal to v4.2

[INFO] catalogsource ibm-licensing-catalog is the same as ibm-licensing-catalog

[INFO] ibm-licensing-operator-app has already updated channel v4.2 and catalogsource ibm-licensing-catalog in the subscription.

subscription.operators.coreos.com/ibm-licensing-operator-app configured

[] Successfully patched subscription ibm-licensing-operator-app in ibm-licensing

[INFO] Waiting for operator ibm-licensing-operator-app to be upgraded

[] Operator ibm-licensing-operator-app is upgraded to latest version in channel v4.2

[INFO] Waiting for operator ibm-licensing-operator-app CSV in namespace ibm-licensing to be bound to Subscription

[] Operator ibm-licensing-operator-app CSV in namespace ibm-licensing is bound to Subscription

[INFO] Waiting for operator ibm-licensing-operator in namespace ibm-licensing to be made available

[] Operator ibm-licensing-operator in namespace ibm-licensing is available

[INFO] Waiting for ibmlicensing instance to be present.

[] ibmlicensing instance present

# Accepting license for ibmlicensing instance in namespace ...

[] License accepted for ibmlicensing instance

[INFO] Checking cert manager readiness.

[INFO] Waiting for pod cert-manager-webhook to be running ...

[] Pod cert-manager-webhook is running.

#  Smoke test for Cert Manager existence...

[INFO] Creating following issuer:

apiVersion: cert-manager.io/v1

kind: Issuer

metadata:

  name: test-issuer

  namespace: cert-manager

spec:

  selfSigned: {}

issuer.cert-manager.io/test-issuer created

[INFO] Creating following certificate:

apiVersion: cert-manager.io/v1

kind: Certificate

metadata:

  name: test-certificate

  namespace: cert-manager

spec:

  commonName: test-certificate

  issuerRef:

    kind: Issuer

    name: test-issuer

  secretName: test-certificate-secret

certificate.cert-manager.io/test-certificate created

[INFO] Waiting for Issuer test-issuer in namespace cert-manager to be Ready

[] Issuer test-issuer in namespace cert-manager is Ready

[INFO] Waiting for Certificate test-certificate in namespace cert-manager to be Ready

[] Certificate test-certificate in namespace cert-manager is Ready

[INFO] Deleting test-issuer Issuer ...

issuer.cert-manager.io "test-issuer" deleted

[INFO] Deleting test-certificate Certificate ...

certificate.cert-manager.io "test-certificate" deleted

[INFO] Deleting 22382secret_name Secret ...

secret "test-certificate-secret" deleted

[] Cert manager is ready.

[INFO] SETUP_SINGLETON_STATUS : 0

[INFO] setup_singleton.sh script executed successfully; hence there is a cert-manager present on the cluster

Waiting for the Cloud Pak for Business Automation operator to be ready. This might take a few minutes...

ibm-cp4a-operator-catalog         ibm-cp4a-operator                       grpc   IBM         3m15s

Found existing ibm operator catalog source, updating it

catalogsource.operators.coreos.com/ibm-cp4a-operator-catalog unchanged

catalogsource.operators.coreos.com/ibm-opencontent-flink unchanged

catalogsource.operators.coreos.com/ibm-cs-opensearch-catalog unchanged

catalogsource.operators.coreos.com/ibm-cert-manager-catalog unchanged

catalogsource.operators.coreos.com/ibm-licensing-catalog unchanged

catalogsource.operators.coreos.com/ibm-cs-install-catalog-v4-6-4 unchanged

catalogsource.operators.coreos.com/bts-operator unchanged

catalogsource.operators.coreos.com/ibm-iam-operator-catalog unchanged

catalogsource.operators.coreos.com/ibm-zen-operator-catalog unchanged

catalogsource.operators.coreos.com/ibm-events-operator-catalog unchanged

catalogsource.operators.coreos.com/cloud-native-postgresql-catalog unchanged

catalogsource.operators.coreos.com/ibm-fncm-operator-catalog unchanged

IBM Operator Catalog source updated!

[INFO] Waiting for CP4BA Operator Catalog pod initialization

[INFO] CP4BA Operator Catalog is running...

ibm-cp4a-operator-catalog-rpbl2

operatorgroup.operators.coreos.com/ibm-cp4a-operator-catalog-group created

CP4BA Operator Group Created!

subscription.operators.coreos.com/ibm-cp4a-operator-catalog-subscription created

CP4BA Operator Subscription Created!

[INFO] Waiting for CP4BA operator pod initialization

..............................

CP4BA operator is running...

ibm-cp4a-operator-6b67dd4f5-ptf2q

[INFO] Waiting for CP4BA Content operator pod initialization

..............................

CP4BA Content operator is running...

ibm-content-operator-77975dd598-7mhvd

Adding the user <my_admin> to the ibm-cp4a-operator role...Done!

Label the default namespace to allow network policies to open traffic to the ingress controller using a namespaceSelector...namespace/default labeled

Done

Storage classes are needed to run the deployment script. For the Starter deployment scenario, you may use one (1) storage class.  For an Production deployment, the deployment script will ask for three (3) storage classes to meet the slow, medium, and fast storage for the configuration of CP4BA components.  If you don't have three (3) storage classes, you can use the same one for slow, medium, or fast.  Note that you can get the existing storage class(es) in the environment by running the following command: oc get storageclass. Take note of the storage classes that you want to use for deployment.

NAME                            PROVISIONER                         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE

<my-storage> (default)   redhat-emea-ssa-team/hetzner-ocp4   Delete          Immediate           false                  119m

At this stage, you have a cluster and a project namespace bronze ready for your ODM deployment with all operators up and running.

OpenShift administration console view of the installed operators in namespace bronze (fig. 2)
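A quick CLI equivalent of this console view, to confirm that the operators are installed and running in the bronze namespace, could be:

oc get csv -n bronze

oc get pods -n bronze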

2.2 Following the installation guide for CP4BA 24.0.0 ODM

The next stage is fully explained in the installation PDF guide: CP4BA single-pattern production deployment on ROKS classic and OCP by following installation guides in PDF.

It helps you design, execute, and validate an ODM Bronze topology deployment. The following topics provide an overview of the instructions to prepare and install the deployment.

Topic

Expected action and results

Preparing databases and secrets for your chosen capabilities by running a script

This is an action.

Use the "cert-kubernetes/scripts/cp4a-prerequisites.sh" script to:

1.     [Run script] Create the property files (DB/LDAP/user)

./cp4a-prerequisites.sh -m property -n bronze

2.     [Manual Action] Modify the property files by replacing <Required> values in the property files under <my_path>/cert-kubernetes/scripts/cp4ba-prerequisites/project/bronze/propertyfile

3.     [Run script] If you chose an external DB, generate the DB SQL statement file, and the YAML template for the secrets of your chosen capability (ODM)

./cp4a-prerequisites.sh -m generate -n bronze

4.     [Manual Action][Optional] If you chose an external DB, run the DB scripts against your database servers

5.     [Manual Action] Create the secrets in your project namespace

./cp4ba-prerequisites/project/bronze/create_secret.sh

6.     [Run script] Validate your storage and, optionally, the DB/LDAP connections and secrets

./cp4a-prerequisites.sh -m validate -n bronze
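As a quick sanity check after these steps, you can list the generated property files and the secrets created in the project (paths are relative to cert-kubernetes/scripts, as used above):

ls cp4ba-prerequisites/project/bronze/propertyfile

oc get secrets -n bronze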

Preparing to install Operational Decision Manager

This is a decision to be made.

For a Bronze installation, we recommend including all ODM components in the same project namespace.

Creating a production deployment

This is an action.

Follow the instructions to generate an ODM Bronze topology CR YAML file by running the deployment script:

cd cert-kubernetes/scripts

./cp4a-deployment.sh -n bronze

A custom resource file is created at <my_path>/cert-kubernetes/scripts/generated-cr/project/bronze/ibm_cp4a_cr_final.yaml.

In the generated CR YAML file, check the custom resource parameter values which have been filled in for you by the script:

- Check the data source,

- Check the LDAP configuration,

- Check the ODM configuration,

as explained in the "Checking and completing your custom resource" section.
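For instance, to review the ODM-related sections of the generated CR before deploying, you could run something like the following (path relative to cert-kubernetes/scripts, as reported by the deployment script):

grep -n -A 5 "odm_configuration" generated-cr/project/bronze/ibm_cp4a_cr_final.yaml

grep -n -A 5 "dc_odm_datasource" generated-cr/project/bronze/ibm_cp4a_cr_final.yaml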

Some modifications can be done to the generated CR YAML file. Use the following table to help you identify the customizable parameters.

Action

Parameter

New value

Replace

metadata.name

Add a meaningful name, which becomes the name of your ICP4ACluster instance.

e.g. bronze

Delete

spec.shared_configuration.sc_iam.default_admin_username

Delete

spec.shared_configuration.sc_drivers_url

Having modified the custom resource parameters, proceed with the deployment as explained in "Deploying the custom resource you created with the deployment script". At this stage, the ICP4ACluster instance that you named bronze is created. After a couple of reconciliation loops of the CP4BA operator, you can verify the deployment.
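If you apply the reviewed CR yourself, the commands would look like the following sketch (follow the referenced section for the supported procedure); the ICP4ACluster resource can then be watched while the operator reconciles it:

oc apply -f generated-cr/project/bronze/ibm_cp4a_cr_final.yaml -n bronze

oc get icp4acluster -n bronze

oc describe icp4acluster bronze -n bronze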

Some basic tuning can be done on the foundation layer to maximize ODM capabilities. Follow the instructions below.

Update

oc patch zenservice iaf-zen-cpdservice --type=json -p '[{ "op": "replace", "path": "/spec/scaleConfig", "value": "<size>" }]'

It is recommended that you set the IBM Cloud Platform UI (Zen) service to the same size as Cloud Pak for Business Automation. The possible values are small, medium, and large.

Update

oc patch CommonService common-service -n $NAMESPACE --type=json -p '[{ "op": "replace", "path": "/spec/size", "value": "small" }]'

It is recommended that you set IBM Common Services to small, as this has no impact on ODM capabilities.
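To read the values back after patching (assuming everything is deployed in the bronze namespace, as in this article), you could use:

oc get zenservice iaf-zen-cpdservice -n bronze -o jsonpath='{.spec.scaleConfig}'

oc get commonservice common-service -n bronze -o jsonpath='{.spec.size}'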

3. Validation

To ensure that the environment works correctly at the CP4BA level, follow the steps in "Validating your production deployment". Additional validations can be done at the ODM level using Validate your ODM topology - 24.0.0. To review the installed ODM services and to install Rule Designer, see "Completing post-installation tasks for Operational Decision Manager".
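At the ODM level, a simple first check is to confirm that the ODM pods are running and to list the exposed routes to reach the consoles (names depend on your instance; the commands below assume the bronze namespace):

oc get pods -n bronze | grep odm

oc get routes -n bronze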

Lastly, here is a sample CR YAML file for an ODM Bronze topology with an internal EDB Postgres database and Active Directory LDAP:

apiVersion: icp4a.ibm.com/v1

kind: ICP4ACluster

metadata:

  name: bronze

  labels:

    app.kubernetes.io/instance: ibm-dba

    app.kubernetes.io/managed-by: ibm-dba

    app.kubernetes.io/name: ibm-dba

    release: 24.0.0

  namespace: bronze

spec:

  appVersion: 24.0.0

  ibm_license: accept

  shared_configuration:

    sc_deployment_license: "production"

    sc_deployment_context: "CP4A"

    sc_image_repository: cp.icr.io

    root_ca_secret: icp4a-root-ca

    sc_deployment_patterns: "foundation,decisions"

    sc_optional_components: "decisionCenter,decisionRunner,decisionServerRuntime"

    sc_deployment_type: "Production"

    sc_deployment_platform: "OCP"

    sc_deployment_profile_size: "small"

    sc_ingress_enable: false

    trusted_certificate_list: []

    storage_configuration:

      sc_slow_file_storage_classname: "<my-storage>"

      sc_medium_file_storage_classname: "<my-storage>"

      sc_fast_file_storage_classname: "<my-storage>"

      sc_block_storage_classname: "<my-storage>"

    enable_fips: false

    sc_is_multiple_az: false

    sc_egress_configuration:

      sc_restricted_internet_access: false

    image_pull_secrets:

    - ibm-entitlement-key    

  ## The beginning section of LDAP configuration for CP4A

  ldap_configuration:

    lc_selected_ldap_type: Microsoft Active Directory

    lc_ldap_server: *****

    lc_ldap_port: '***'

    lc_bind_secret: topology-ad-ldap-bind-secret

    lc_ldap_base_dn: *****

    lc_ldap_ssl_enabled: true

    lc_ldap_ssl_secret_name: topology-ad-ldap-ssl-cert

    lc_ldap_user_name_attribute: *****

    lc_ldap_user_display_name_attr: cn

    lc_ldap_group_base_dn: *****

    lc_ldap_group_name_attribute: *:cn

    lc_ldap_group_display_name_attr: cn

    lc_ldap_group_membership_search_filter: *****

    lc_ldap_group_member_id_map: *****

    ad:

      lc_ad_gc_host: *****

      lc_ad_gc_port: '***'

    tds:

      lc_user_filter: "(&(cn=%v)(objectclass=person))"

      lc_group_filter: "(&(cn=%v)(|(objectclass=groupofnames)(objectclass=groupofuniquenames)(objectclass=groupofurls)))"

## The beginning section of database configuration for CP4A

  datasource_configuration:

    dc_ssl_enabled: false

    dc_icn_datasource:

      dc_use_postgres: true

      dc_database_type: postgresql

      dc_common_icn_datasource_name: "ECMClientDS"

      database_servername: "postgres-cp4ba-rw.{{ meta.namespace }}.svc"

      database_port: "5432"

      database_name: icndb

      database_ssl_secret_name: "{{ meta.name }}-pg-client-cert-secret"

      dc_oracle_icn_jdbc_url: ""

      dc_hadr_validation_timeout: 15

      dc_hadr_standby_servername: ""

      dc_hadr_standby_port: ""

      dc_hadr_retry_interval_for_client_reroute: 15

      dc_hadr_max_retries_for_client_reroute: 3

    dc_odm_datasource:

      dc_database_type: postgresql

      database_servername: "postgres-cp4ba-rw.{{ meta.namespace }}.svc"

      dc_common_database_port: "5432"

      dc_common_database_name: "odmdb"

      dc_common_database_instance_secret: "ibm-odm-db-secret"

      dc_common_ssl_enabled: false

      dc_ssl_secret_name: ""

      dc_use_postgres: true

  ########################################################################
  ########      IBM Operational Decision Manager configuration    ########
  ########################################################################

  odm_configuration:

    decisionCenter:

      enabled: true

    decisionServerRuntime: