How do I install ODM 8.11.1.0 Gold topology on Certified Kubernetes?

By Pierre-Andre Paumelle posted Fri April 07, 2023 10:05 AM

  

Co-authored by Sia Sin Tay, Nicolas Peulvast, Johanne Sebaux, and Anthony Damiano.

PDF version available at: https://community.ibm.com/community/user/automation/viewdocument/how-do-i-install-odm-81110-gold?CommunityKey=c0005a22-520b-4181-bfad-feffd8bdc022&tab=librarydocuments

An example of a Gold Topology deployed on a private Red Hat OpenShift Container Platform (OCP) cluster and a public Amazon Elastic Kubernetes Service (EKS) cluster.

This article is part of an article series around Operational Decision Manager (ODM) on Certified Kubernetes. For more information about ODM environments and the topologies, see our ODM topologies blog entries.

1. Introduction

This document aims to give an example of how to install this topology using a private Red Hat OpenShift Container Platform (OCP) cluster and a public Amazon Elastic Kubernetes Service (EKS) cluster.

We tried to cover as many types of installation as possible, so that this article can be used as a reference.

ODM Gold topology is an enterprise deployment of several ODM environments, each in an individual namespace, within several clusters. 


Schema of an ODM Gold topology (fig. 1)

A minimal Gold topology consists of the following environments:  Authoring, Sandboxes, Pre-prod, and Production. 

A full Gold topology consists of an Authoring, Sandboxes, a Pre-prod, and several Production environments.   

There is one Decision Center to govern all Decision Servers. 

The Gold topology is best suited for applications with high production constraints (large scale, high availability).

ODM on Certified Kubernetes is deployed using a Helm chart, and you can have your own database for the Authoring, Pre-prod, and Production environments. The databases can be externalized and separated.

2. Gold topology installation example

As an example, for this article, we will set up an ODM Gold topology that contains 5 different environments spread across 2 separate clusters.

The first cluster, referred to as the “Authoring cluster”:

·      is an Amazon EKS cluster hosted in eu-west-3 region.

·      uses Azure Active Directory with OIDC for identity and access management (IAM).

·      relies on an external PostgreSQL database using Amazon RDS, except for Sandbox2, which uses an internal database provided by IBM ODM.

·      provides an Authoring environment: your production Decision Center to govern all Decision Servers of this cluster.

·      has at least one sandbox environment: one or more places to test your rulesets before deploying them to the Pre-production environment. We will deploy one sandbox with an external database, and one sandbox with the internal database bundled with ODM for Kubernetes (not recommended for production).

·      contains a Pre-Production environment: a place to finalize the validation of your rulesets before deploying to Production (performance, long-running tests, and so on).

The second cluster, referred to as the “Production cluster”:

·      is an OpenShift cluster hosted on premise.

·      uses a local Active Directory with LDAPS for IAM.

·      leverages an external Db2 database to persist the data.

·      hosts a single environment named Production, where you deploy and execute your RuleApps in production.


Table summarizing the Production and Authoring cluster setup (fig. 2)

2.1 Knowledge base

The documentation entry point is the section Installing ODM releases on Certified Kubernetes.

The procedure can be decomposed as follows:

-       Handling the licensing: define which parameters to set per Helm release, and install the IBM License Service (ILS) once per cluster.

-       Handling the requirements: software and system minimal requirements.

-       Handling the persistence: database creation, and its connection details.

-       Handling the security: security context to adapt to current usage.

-       Handling the users: predefined and/or LDAP users to be defined, and ODM role definition.

The Prerequisites page provides information about the license agreement and how to install the IBM License Service inside your Kubernetes cluster. You can also find information about the software, persistence, security, and user access requirements.

For information about configuring the database, see Configuring an external database. You can also go through Customizing ODM for production for more details on how to protect your containers, define the users and groups that access Decision Center and Decision Server, and other configurations.

2.2 Authoring Cluster

In this section, we follow Deploying IBM Operational Decision Manager on Amazon EKS to guide you in deploying the various ODM environments of the Authoring cluster.

2.2.1 Prerequisites

1.     Make sure that you have installed AWS CLI and relevant command line tools. See Getting started with Amazon EKS for more details.

2.     You must install the IBM License Service (once) in your Amazon EKS cluster. For more information, see the section “On other platforms” in Licensing and metering.

3.     Obtain the Helm chart. See Step 1 of Installing a Helm release of ODM for production. Our procedure follows the “Using the IBM Entitled registry with your IBMid” option, which requires a secret holding your entitlement key to access the IBM Entitled registry. Keep this key at hand for step 3 of each installation procedure.

4.     Create your PostgreSQL databases dedicated to Authoring, Pre-prod and Sandbox1 environments. This is not needed for Sandbox2 as it uses an internal database.

5.     Run the following commands to add and update the ibm-helm-repo Helm repository:

HELM_REPO="https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm"

helm repo add ibm-helm-repo $HELM_REPO

helm repo update
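Optionally, you can check that the chart is visible in the repository you just added (ibm-odm-prod is the chart name used in the helm install commands later in this article):

helm search repo ibm-helm-repo/ibm-odm-prod --versions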

2.2.2 Installing the Authoring environment

This environment is composed of two Decision Center replicas, two Decision Runner replicas, and a single Decision Server console. Its purpose is to govern, author and deploy your decision services.

Authoring environment requirements and parameters summary (fig. 3)

Procedure

1.     Create a namespace for your ODM Authoring environment. For example:

kubectl create ns authoring

2.     Set context to this namespace

kubectl get ns

kubectl config set-context --current --namespace=authoring

3.  Create a pull secret using your entitlement key

kubectl create secret docker-registry my-odm-pull-secret \
  --docker-server=cp.icr.io --docker-username=cp \
  --docker-password="<API_KEY_GENERATED>" --docker-email=<USER_EMAIL>

where

-       <API_KEY_GENERATED> is the entitlement key that you retrieve from MyIBM Container Software Library using your IBMid. Make sure you enclose the key in double quotation marks.

-       <USER_EMAIL> is the email address that is associated with your IBMid.

4.     Create a secret with your PostgreSQL database credential.

kubectl apply -f postgres-secret.yaml

An example of the YAML file:

apiVersion: v1
kind: Secret
metadata:
  name: my-odm-auth-secret-postgres
type: Opaque
stringData:
  db-user: "my_user"
  db-password: "my_password"

5.     Create secrets that hold your Azure AD, Microsoft certificates and the ODM configuration files:

keytool -printcert -sslserver login.microsoftonline.com -rfc > microsoft.crt

kubectl create secret generic my-odm-auth-secret-ms --from-file=tls.crt=microsoft.crt

kubectl create secret generic my-odm-auth-secret-digicert --from-file=tls.crt=digicert.crt

kubectl create secret generic my-odm-auth-secret-azuread --from-file=OdmOidcProviders.json=./output/OdmOidcProviders.json --from-file=openIdParameters.properties=./output/openIdParameters.properties --from-file=openIdWebSecurity.xml=./output/openIdWebSecurity.xml --from-file=webSecurity.xml=./output/webSecurity.xml

Follow the instructions in this article Create secrets to configure ODM with Azure AD to obtain the certificates and the related configuration files.
Note: You can customize the generated webSecurity.xml to add additional users with basic authentication in the <basicRegistry> element if needed.

6.     Customize the values.yaml file and specify the values of the parameters per ODM Authoring environment to install the chart.


Here is a sample myvalues-authoring.yaml file for an ODM Authoring deployment containing 2 Decision Center replicas, 2 Decision Runner replicas, and a single Decision Server console:

customization:
  runAsUser: ""
  deployForProduction: true
  authSecretRef: my-odm-auth-secret-azuread
  trustedCertificateList:
    - my-odm-auth-secret-ms
    - my-odm-auth-secret-digicert
license: true
oidc:
  enabled: true

serviceAccountName: ''
service:
  #  enableTLS=true (default value)
  type: ClusterIP
  ingress:
    enabled: true
    host: odm-authoring.<my_company>.aws.com
    tlsHosts: odm-authoring.<my_company>.aws.com
    #    tlsSecretRef: ingress-tls
    annotations:
      - alb.ingress.kubernetes.io/backend-protocol: HTTPS
      - alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-3:XXXXX:certificate/YYYYY
      - alb.ingress.kubernetes.io/scheme: internet-facing
      - alb.ingress.kubernetes.io/target-type: ip
externalDatabase:
  type: postgresql
  secretCredentials: my-odm-auth-secret-postgres
  databaseName: odmgolda
  serverName: my-odm-gold-db.cluster-XXX.eu-west-3.rds.amazonaws.com
  port: '5432'
image:
  repository: cp.icr.io/cp/cp4a/odm
  pullSecrets: my-odm-pull-secret
decisionCenter:
  enabled: true
  extendRoleMapping: true
  replicaCount: 2
  resources:
    limits:
      cpu: '2'
      memory: 8Gi
    requests:
      cpu: '2'
      memory: 4Gi
decisionServerRuntime:
  enabled: false
decisionRunner:
  enabled: true
  extendRoleMapping: true
  replicaCount: 2
  resources:
    limits:
      cpu: '2'
      memory: 2Gi
    requests:
      cpu: '2'
      memory: 2Gi
decisionServerConsole:
  extendRoleMapping: true
  resources:
    limits:
      cpu: '2'
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 512Mi

To know more about the ODM parameters, see  ODM for production configuration parameters.

For information about AWS Load Balancer Controller, see https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/.

7.     Install the ODM Authoring deployment in the authoring namespace with the customized myvalues-authoring.yaml file:

helm install ibm-odm-auth ibm-helm-repo/ibm-odm-prod -f myvalues-authoring.yaml -n authoring --version 22.2.0

8.     To check the installation status, you can run the following commands:

helm status ibm-odm-auth
helm get values ibm-odm-auth
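You can also wait for all the pods of the release to become ready before configuring the Ingress, for example:

kubectl get deployments,pods -n authoring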

9.     Edit Ingress

kubectl edit ingress ibm-odm-auth-odm-ingress

a.     Set spec.ingressClassName in the Ingress instance to alb.

spec:
  ingressClassName: alb

b.     Check that the rules.host parameter is set to odm-authoring.<my_company>.aws.com as defined in your YAML file. It will be needed when creating the DNS record in step 12.

c.     Make sure that the alb.ingress.* annotations are present in the metadata of the Ingress instance.


Description of Ingress object in current namespace (fig. 4)


10.  Check that the corresponding load balancer instance has been created successfully. In AWS console, search for EC2 resources, go to Load balancers, and find your instance.

Tip: if you have too many ingresses, you can filter by your namespace name.


Load balancer list in current namespace (fig. 5)
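If you prefer the command line to the AWS console, the DNS name of the ALB created by the AWS Load Balancer Controller is also reported in the Ingress status; it is the target you will point the Route 53 record to in step 12. A quick way to retrieve it (assuming the Ingress name created above):

kubectl get ingress ibm-odm-auth-odm-ingress -n authoring \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'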

11.  Enable Sticky session for Decision Center:

a.     In AWS console, search for Target groups, and edit each of the target groups.

b.     Enable Stickiness.

c.     Set Stickiness type to “Application-based cookie”.

d.     Set the Stickiness duration to 8 hours, which corresponds to the session invalidation timeout set in Decision Center.

e.     Set “App cookie name” to <JSESSIONID_DC_RELEASE_NAME>

 

Target groups edition wizard (fig. 6)

To know more about sticky sessions, see Sticky sessions for your Application Load Balancer.
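The same stickiness settings can also be applied with the AWS CLI instead of the console. This is a sketch: replace the target group ARN with your own value; 28800 seconds corresponds to the 8 hours mentioned above.

aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:eu-west-3:XXXXX:targetgroup/YYYYY/ZZZZZ \
  --attributes Key=stickiness.enabled,Value=true \
               Key=stickiness.type,Value=app_cookie \
               Key=stickiness.app_cookie.cookie_name,Value=<JSESSIONID_DC_RELEASE_NAME> \
               Key=stickiness.app_cookie.duration_seconds,Value=28800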

12.  In AWS console, search for Route 53 resources, go to Hosted zones, and create a DNS record with the following options:


Record wizard (fig. 7)

13.  As a result, the Decision endpoints will be:

·      https://odm-authoring.<my_company>.aws.com/decisioncenter

·      https://odm-authoring.<my_company>.aws.com/res

·      https://odm-authoring.<my_company>.aws.com/DecisionRunner

14.  Register the redirect URLs in your Azure AD application as explained in Step 2 of this documentation section.

2.2.3 Installing the Sandbox1 environment

This environment is composed of a single Decision Server console and a single Decision Server runtime. The goal of this environment is to deliver a sandbox where a developer or a development team can test and execute their Decision Services. The sandbox environment uses an external PostgreSQL database so that the first round of tests can be done against imported production data.


Sandbox1 environment requirements and parameters summary (fig. 8)

Procedure

1.     Create a namespace for your ODM Sandbox1 environment. For example:

kubectl create ns sandbox1

2.     Set context to this namespace

kubectl get ns

kubectl config set-context --current --namespace=sandbox1

3.     Create the secrets as mentioned in the Authoring section for the entitlement registry, PostgreSQL, Azure AD certificates and ODM configuration files. Follow Steps 3 to 5 described in the Authoring section.

4.     Customize the values.yaml file and specify the values of the parameters per ODM Sandbox1 environment to install the chart.


Here is a sample myvalues-sandbox1.yaml file to deploy a Decision Server runtime and a Decision Server console. Note that the parameter customization.deployForProduction is set to false.

customization:
  runAsUser: ""
  deployForProduction: false
  authSecretRef: my-odm-auth-secret-azuread
  trustedCertificateList:
    - my-odm-auth-secret-ms
    - my-odm-auth-secret-digicert
license: true
oidc:
  enabled: true
serviceAccountName: ''
service:
  #  enableTLS=true (default value)
  type: ClusterIP
  ingress:
    enabled: true
    host: odm-sandbox1.<my_company>.aws.com
    tlsHosts: odm-sandbox1.<my_company>.aws.com
    #    tlsSecretRef: ingress-tls
    annotations:
      alb.ingress.kubernetes.io/backend-protocol: HTTPS
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-3:XXXXX:certificate/YYYYY
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
externalDatabase:
  type: postgresql
  secretCredentials: my-odm-auth-secret-postgres
  databaseName: odmgolds1
  serverName: my-odm-gold-db.cluster-XXX.eu-west-3.rds.amazonaws.com
  port: '5432'
image:
  repository: cp.icr.io/cp/cp4a/odm
  pullSecrets: my-odm-pull-secret
decisionCenter:
  enabled: false
decisionServerRuntime:
  enabled: true
  extendRoleMapping: true
  replicaCount: 1
  resources:
    limits:
      cpu: '1'
      memory: 2Gi
    requests:
      cpu: '1'
      memory: 2Gi
decisionRunner:
  enabled: false
decisionServerConsole:
  extendRoleMapping: true
  resources:
    limits:
      cpu: '1'
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 512Mi

To know more about the ODM parameters, see  ODM for production configuration parameters.

5.     Install the ODM Sandbox1 deployment in the sandbox1 namespace using the customized myvalues-sandbox1.yaml file:

helm install ibm-odm-sandbox1 ibm-helm-repo/ibm-odm-prod -f myvalues-sandbox1.yaml -n sandbox1 --version 22.2.0

6.     To check the installation status, you can run the following commands:

helm status ibm-odm-sandbox1
helm get values ibm-odm-sandbox1

7.     Edit Ingress (ibm-odm-sandbox1-odm-ingress) as described in Step 9 of the Authoring section.

a.     Check that the rules.host parameter is set to odm-sandbox1.<my_company>.aws.com as defined in your YAML file.

8.     Add a DNS record in Route 53. Note that the record name should be configured as odm-sandbox1.

9. The Decision Console and Decision Runtime endpoints will be:

a.     https://odm-sandbox1.<my_company>.aws.com/res

b.     https://odm-sandbox1.<my_company>.aws.com/DecisionService

10.  Register the ODM redirect URLs as described in Step 14 of the Authoring section.
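Once the DNS record resolves, a quick smoke test of the new endpoints can be done with curl; depending on your OIDC configuration you should get an HTTP 200 or a redirect to the identity provider:

curl -sk -o /dev/null -w "%{http_code}\n" https://odm-sandbox1.<my_company>.aws.com/res

curl -sk -o /dev/null -w "%{http_code}\n" https://odm-sandbox1.<my_company>.aws.com/DecisionService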

2.2.4 Installing the Sandbox2 environment

This second sandbox is also composed of a single Decision Server console and a single Decision Server runtime. However, it uses the internal database to illustrate that option.


Sandbox2 environment requirements and parameters summary (fig. 9)

Procedure

1.     Create a namespace for your ODM Sandbox2 environment. For example:

kubectl create ns sandbox2

2.     Set context to this namespace

kubectl get ns

kubectl config set-context --current --namespace=sandbox2

3.     Create the secrets as mentioned in the Authoring section for the entitlement registry, Azure AD certificates and ODM configuration files. Follow Steps 3 and 5 described in the Authoring section.

4.     Retrieve the storage class set up on your cluster.

kubectl get sc

For example, on Amazon EKS the default storage class is typically gp2, which is the value used in the sample values file below.
5.     Customize the values.yaml file and specify the values of the parameters per ODM Sandbox2 environment to install the chart.


Here is a sample myvalues-sandbox2.yaml file to deploy a Decision Server runtime and a Decision Server console. Note that the parameter customization.deployForProduction is set to false.

customization:
  runAsUser: ""
  deployForProduction: false
  authSecretRef: my-odm-auth-secret-azuread
  trustedCertificateList:
    - my-odm-auth-secret-ms
    - my-odm-auth-secret-digicert
license: true
oidc:
  enabled: true
serviceAccountName: ''
service:
  #  enableTLS=true (default value)
  type: ClusterIP
  ingress:
    enabled: true
    host: odm-sandbox2.<my_company>.aws.com
    tlsHosts: odm-sandbox2.<my_company>.aws.com
    #    tlsSecretRef: ingress-tls
    annotations:
      alb.ingress.kubernetes.io/backend-protocol: HTTPS
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-3:XXXXX:certificate/YYYYY
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip

internalDatabase:
  persistence:
    enabled: true
    resources:
      requests:
        storage: 5Gi
    storageClassName: gp2
    useDynamicProvisioning: true
  populateSampleData: false
image:
  repository: cp.icr.io/cp/cp4a/odm
  pullSecrets: my-odm-pull-secret
decisionCenter:
  enabled: false
decisionServerRuntime:
  enabled: true
  extendRoleMapping: true
  replicaCount: 1
  resources:
    limits:
      cpu: '1'
      memory: 2Gi
    requests:
      cpu: '1'
      memory: 2Gi
decisionRunner:
  enabled: false
decisionServerConsole:
  extendRoleMapping: true
  resources:
    limits:
      cpu: '1'
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 512Mi

To know more about the ODM parameters, see  ODM for production configuration parameters.

6.     Install the ODM Sandbox2 deployment in the sandbox2 namespace using the customized myvalues-sandbox2.yaml file:

helm install ibm-odm-sandbox2 ibm-helm-repo/ibm-odm-prod -f myvalues-sandbox2.yaml -n sandbox2 --version 22.2.0

7.     To check the installation status, you can run the following commands:

helm status ibm-odm-sandbox2
helm get values ibm-odm-sandbox2
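Because Sandbox2 relies on the internal database, you can also confirm that its PersistentVolumeClaim was bound with the gp2 storage class and that the database pod is running:

kubectl get pvc -n sandbox2

kubectl get pods -n sandbox2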

8.     Edit Ingress (ibm-odm-sandbox2-odm-ingress) as described in Step 9 of the Authoring section.

a.     Check that the rules.host parameter is set to odm-sandbox2.<my_company>.aws.com as defined in your YAML file.

9.     Add a DNS record in Route 53. Note that the record name should be configured as odm-sandbox2.

10.  The Decision Console and Decision Runtime endpoints will be:

a.     https://odm-sandbox2.<my_company>.aws.com/res

b.     https://odm-sandbox2.<my_company>.aws.com/DecisionService

11.  Register the ODM redirect URLs as described in Step 14 of the Authoring section.

2.2.5 Installing the Pre-Production environment

This environment is composed of a single Decision Server console and several Decision Server runtimes. Its purpose is to mimic the Production environment, so that you can run performance tests of the Decision Services before deploying them to Production.


Pre-prod environment requirements and parameters summary (fig. 10)

Procedure

1.     Create a namespace for your ODM Pre-prod environment. For example:

kubectl create ns pre-prod

2.     Set context to this namespace

kubectl get ns

kubectl config set-context --current --namespace=pre-prod

3.     Create the secrets as mentioned in the Authoring section for the entitlement registry, PostgreSQL, Azure AD certificates and ODM configuration files. Follow Steps 3 to 5 described in the Authoring section.

4.     Customize the values.yaml file and specify the values of the parameters per ODM Pre-prod environment to install the chart.


Here is a sample myvalues-preprod.yaml file for an ODM Pre-production deployment containing 3 Decision Server runtimes and a Decision Server console. Note that the parameter customization.deployForProduction is set to false.

customization:
  runAsUser: ""
  deployForProduction: false
  authSecretRef: my-odm-auth-secret-azuread
  trustedCertificateList:
    - my-odm-auth-secret-ms
    - my-odm-auth-secret-digicert
license: true
oidc:
  enabled: true
serviceAccountName: ''
service:
#  enableTLS=true (default value)
  type: ClusterIP
  ingress:
    enabled: true
    host: odm-preprod.<my_company>.aws.com
    tlsHosts: odm-preprod.<my_company>.aws.com
    annotations:
      - alb.ingress.kubernetes.io/backend-protocol: HTTPS
      - alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-3:XXXXXXXX:certificate/YYYYYYYYY
      - alb.ingress.kubernetes.io/scheme: internet-facing
      - alb.ingress.kubernetes.io/target-type: ip
externalDatabase:
  type: postgresql
  secretCredentials: my-odm-auth-secret-postgres
  databaseName: odmgoldpp
  serverName: my-odm-gold-db.cluster-XXX.eu-west-3.rds.amazonaws.com
  port: '5432'
image:
  repository: cp.icr.io/cp/cp4a/odm
  pullSecrets: my-odm-pull-secret
decisionCenter:
  enabled: false
decisionServerRuntime:
  enabled: true
  extendRoleMapping: true
  replicaCount: 3
  resources:
    limits:
      cpu: '2'
      memory: 2Gi
    requests:
      cpu: '2'
      memory: 2Gi
decisionRunner:
  enabled: false
decisionServerConsole:
  extendRoleMapping: true
  resources:
    limits:
      cpu: '2'
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 512Mi

To know more about the ODM parameters, see  ODM for production configuration parameters.

5.     Install the ODM Pre-production deployment in the pre-prod namespace with the customized myvalues-preprod.yaml file:

helm install ibm-odm-preprod ibm-helm-repo/ibm-odm-prod -f myvalues-preprod.yaml -n pre-prod --version 22.2.0

6.     To check the installation status, you can run the following commands:

helm status ibm-odm-preprod
helm get values ibm-odm-preprod
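You can also check that the three Decision Server runtime replicas requested in myvalues-preprod.yaml are up and ready:

kubectl get deployments -n pre-prod

kubectl get pods -n pre-prod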

7.     Edit Ingress (ibm-odm-preprod-odm-ingress) as described in Step 9 of the Authoring section.

a.     Check that the rules.host parameter is set to odm-preprod.<my_company>.aws.com as defined in your YAML file.

8.     Add a DNS record in Route 53. Note that the record name should be configured as odm-preprod.

9.     The Decision Console and Decision Runtime endpoints will be:

a.     https://odm-preprod.<my_company>.aws.com/res

b.     https://odm-preprod.<my_company>.aws.com/DecisionService

10.  Register the ODM redirect URLs as described in Step 14 of the Authoring section.

2.3 Production Cluster

The procedure in this section aims to guide you through the ODM Production deployment in the OpenShift cluster.

2.3.1 Prerequisites

1.     Make sure that you have installed the OpenShift CLI (“oc”) and relevant command line tools.

2.     You must install the IBM License Service (once) in your OpenShift cluster. For more information, see the section “In OpenShift” in Licensing and metering.

3.     Obtain the Helm chart, if you have not already done so. See Step 1 of Installing a Helm release of ODM for production.

4.     Run the following commands to add and update the ibm-helm-repo Helm repository (if you have not already done so):

HELM_REPO="https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm"

helm repo add ibm-helm-repo $HELM_REPO

helm repo update

2.3.2 Installing the Production environment


Production environment requirements and parameters summary (fig. 11)

Procedure

When the preparations are done, you can proceed with the ODM deployment on the OpenShift cluster using the Helm chart.

1.     Create a namespace for your ODM Production environment. For example:

oc new-project production

2.     Create the secret my-odm-prod-secret-ldap for the LDAP configuration, where webSecurity.xml can be one of the options described in Configuring user access without OpenID.
Note: You can customize webSecurity.xml to add additional users with basic authentication if needed.

oc create secret generic my-odm-prod-secret-ldap --from-file=webSecurity.xml=webSecurity.xml

3.     Create the secret my-odm-prod-secret-db2-ssl containing the Db2 SSL certificate. For more information on how to generate the Db2 SSL certificate, see Self-signing digital certificates. An example of how to create the secret:

oc create secret generic my-odm-prod-secret-db2-ssl --from-file="truststore.jks" --from-literal=truststore_password=password
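If you do not already have a truststore.jks, one way to build it from the exported Db2 server certificate is with keytool. This is a sketch: db2-server.arm is an assumed file name for the exported certificate, and the store password matches the truststore_password used in the command above.

keytool -importcert -alias db2 -file db2-server.arm -keystore truststore.jks -storepass password -noprompt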

4.     Create the secret my-odm-prod-secret-db2 to hold the Db2 credentials, and the secret my-odm-prod-secret-ldap-cert to include the LDAP SSL certificate:

oc apply -f secret.yaml

where the secret.yaml file contains

apiVersion: v1
kind: Secret
metadata:
  name: my-odm-prod-secret-db2
type: Opaque
stringData:
  db-user: "myDb2User"
  db-password: "myDb2pass"
---
kind: Secret
apiVersion: v1
metadata:
  name: my-odm-prod-secret-ldap-cert
data:
  tls.crt: >-
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUdJVENDQlFtZ0F3SUJBZ0lE
    UblgyYXNpa2EweEgzZ1d1b1pqQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0=
type: Opaque

For related information about securing LDAP by SSL, see Configuring LDAP over SSL.
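If you still need to retrieve the Active Directory certificate, one way is to query the LDAPS endpoint with openssl and base64-encode the result for the tls.crt field of the secret above. The LDAP host below is an assumed example; on macOS, use base64 without the -w 0 option.

openssl s_client -connect my-ldap-server.<my_company>.com:636 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > ldap-cert.pem

base64 -w 0 ldap-cert.pem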

5.     Customize the values.yaml file and specify the values of the parameters per ODM Production environment to install the chart. You can extract the values.yaml file from the Helm chart (ibm-odm-prod-version.tgz archive).

Here is a sample myvalues-production.yaml file for an ODM Production deployment with an external Db2 database and Active Directory LDAP over SSL. Note that the parameter customization.deployForProduction is set to true.

customization:
  runAsUser: ""
  deployForProduction: true
  # holding secret with LDAP connection credentials
  authSecretRef: my-odm-prod-secret-ldap
  # Specify a list of secrets that encapsulate certificates in PEM format to be included in the truststore
  trustedCertificateList:
    - my-odm-prod-secret-ldap-cert
license: true
serviceAccountName: ''
service:
  enableRoute: true
externalDatabase:
  type: db2
  secretCredentials: my-odm-prod-secret-db2
  databaseName: odmgoldp
  serverName: my-db2-server-Name
  sslSecretRef: my-odm-prod-secret-db2-ssl
  port: '60001'
image:
  repository: cp.icr.io/cp/cp4a/odm
decisionCenter:
  enabled: false
decisionServerRuntime:
  enabled: true
  extendRoleMapping: true
  replicaCount: 3
  resources:
    limits:
      cpu: '2'
      memory: 2Gi
    requests:
      cpu: '2'
      memory: 2Gi
decisionRunner:
  enabled: false
decisionServerConsole:
  extendRoleMapping: true
  resources:
    limits:
      cpu: '2'
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 512Mi

To know more about the ODM parameters, see  ODM for production configuration parameters.

6.     Install the ODM Production environment in production namespace with the customized myvalues-production.yaml file using the following command:

helm install ibm-odm-prod ibm-helm-repo/ibm-odm-prod -f myvalues-production.yaml -n production --version 22.2.0

7.     To check the installation status, you can run the following commands:

helm status ibm-odm-prod
helm get values ibm-odm-prod

8.     To get the Decision Console and Decision Runtime endpoints, you can run this command:

oc get route
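For example, to list only the hostnames and run a quick availability check on the Decision Server console (replace the host placeholder with the route host returned for the console):

oc get route -n production -o jsonpath='{range .items[*]}{.spec.host}{"\n"}{end}'

curl -sk -o /dev/null -w "%{http_code}\n" https://<decision-server-console-route-host>/res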

3. Validate your ODM environments

Once everything is configured and deployed, you can perform post-installation tasks as described in Completing post-installation tasks.

Coming soon: We will provide an article with additional validations at the ODM level. Stay tuned!
