Business Automation


How do I install ODM 9.0.0.1 Gold topology on Certified Kubernetes?

By Said Moutaa posted Thu December 19, 2024 05:41 AM

  


 

An example of the Gold topology deployed on a private Red Hat OpenShift Container Platform (OCP) cluster and a public Amazon Elastic Kubernetes Service (EKS) cluster. 

 

This article is part of an article series around Operational Decision Manager (ODM) on Certified Kubernetes. For more information about ODM environments and the topologies, see our ODM topologies blog entries. 

1. Introduction 

 

This document gives an example of how to install this topology using a private Red Hat OpenShift Container Platform (OCP) cluster and a public Amazon Elastic Kubernetes Service (EKS) cluster.  

We tried to cover as many types of installation as possible, so that this article could be used as a reference. 

 

ODM Gold topology is an enterprise deployment of several ODM environments, each in an individual namespace, within several clusters.  

 

Schema of an ODM Gold topology (fig. 1) 

 

A minimal Gold topology consists of the following environments: Authoring, Sandboxes, Pre-prod, and Production. 

 

A full Gold topology consists of an Authoring, Sandboxes, a Pre-prod, and several Production environments.    

 

There is one Decision Center to govern all Decision Servers.  

 

The Gold topology is best suited for applications with high production constraints (large scale, high availability). 

 

ODM on Certified Kubernetes is deployed using a Helm chart. Each of the Authoring, Pre-production, and Production environments can have its own database, and the databases can be externalized and separated.  

 

 

2. Gold topology installation example 

 

2.1 Knowledge base 

 

The documentation entry point is this section: Installing ODM releases on Certified Kubernetes 

 

The Prerequisites page provides information about this installation step, which can be decomposed as follows: 

  • Handling the licensing: define which parameters to set per Helm release (production or non-production), and install the IBM License Service (ILS) once per cluster. 

  • Handling the requirements: check software and system minimal requirements. 

  • Handling the persistence: create the database and retrieve its connection details. 

  • Handling the security: review the security context and adapt it to your usage. 

  • Handling the users: define the predefined and/or LDAP users, and map them to ODM roles. 

 

For information about configuring the database, see Configuring an external database. You may also go through Customizing ODM for production to find out more about how to protect your containers, how to define the users and groups that access Decision Center and Decision Server, and other configurations. 

 

For more information about the License Service installation options, see this blog entry: https://community.ibm.com/community/user/automation/blogs/johanne-sebaux/2023/10/13/how-do-i-easily-install-and-use-ibm-license-servic  

 

2.2 Description 

 

As an example, for this article, we will set up an ODM Gold topology that contains 5 different environments on 2 separate clusters.  

 

The first cluster, referred to as the “Authoring cluster”: 

  • is an Amazon EKS cluster hosted in eu-west-3 region. 

  • uses Microsoft Entra ID (formerly Azure Active Directory) with OIDC for IAM. 

  • relies on an external PostgreSQL database using Amazon RDS, except for Sandbox2 which uses an internal database provided by IBM ODM.  

  • provides an Authoring environment: your production Decision Center to govern all Decision Servers of this cluster. 

  • has at least one sandbox environment: one or more places to test your rulesets before deploying to the Pre-production environment. We will deploy one sandbox with an external database, and one sandbox with the internal database bundled with ODM for Kubernetes (not recommended for production). 

  • contains a Pre-production environment: a place to finalize the validation of your rulesets before deploying to Production (performance, long-running tests, and so on). 

 

The second cluster, referred to as the “Production cluster”: 

  • is an OpenShift cluster hosted on premise. 

  • uses a local Active Directory over LDAPS for IAM. 

  • leverages an external Db2 database to persist the data.  

  • hosts a single environment named Production, where you deploy and execute your RuleApps. 

 


Table summarizing the Production and Authoring cluster setup (fig. 2) 

3. Authoring cluster 

 

In this section, we will focus on Deploying IBM Operational Decision Manager on Amazon EKS  to guide you in deploying various ODM environments in the Authoring cluster.  

 

3.1 Authoring cluster prerequisites 

 

  1. Make sure that you have installed AWS CLI and relevant command line tools. See Getting started with Amazon EKS for more details.  

  2. You must install the IBM License Service (once) in your Amazon EKS cluster. For more information on the installation procedure, see the documentation section Installing License Service on the Kubernetes cluster. Validate this installation using this documentation section: Checking License Service components. 

  3. Obtain the Helm chart. See the procedure in Installing a Helm release of ODM for production.  

  4. Create a PostgreSQL database instance. For more information, please read this documentation page: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.PostgreSQL.html 

  5. Create your PostgreSQL databases dedicated to the Authoring, Pre-prod and Sandbox1 environments. This is not needed for Sandbox2 as it uses an internal database.  

  6. Run the following commands to add and update the ibm-helm-repo repository:  

 

HELM_REPO="https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm" 

helm repo add ibm-helm-repo $HELM_REPO 

helm repo update 
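The per-environment databases from the step above can be scripted. A minimal sketch, assuming the psql client and placeholder connection details — the database names match the values files used later in this article:

```shell
# Generate the CREATE DATABASE statements for the environments that use
# the external PostgreSQL instance (Authoring, Sandbox1, Pre-prod).
for DB in odmgolda odmgolds1 odmgoldpp; do
  echo "CREATE DATABASE $DB;"
done > create-odm-dbs.sql

# Run them against the RDS instance (placeholder endpoint and user):
# psql -h my-odm-gold-db.cluster-XXX.eu-west-3.rds.amazonaws.com -U postgres -f create-odm-dbs.sql
```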

3.2 Installing the Authoring environment 

 

3.2.1. Authoring environment architecture and setup 

 

This environment is composed of two Decision Center replicas, two Decision Runner replicas, and a single Decision Server console. Its purpose is to govern, author and deploy your decision services. 

 

Une image contenant texte, capture d’écran, nombre, Police

Description générée automatiquement 

Authoring environment requirements and parameters summary (fig. 3) 

 

Before installing, we need to create a location to deploy into: 

  1. Create a namespace for your ODM Authoring environment. For example:  

kubectl create ns authoring 

  2. Set the context to this namespace: 

kubectl get ns 

kubectl config set-context --current --namespace=authoring 

 

3.2.2. Authoring environment prerequisite procedure 

 

  1. Create a pull secret using your entitlement key: 

kubectl create secret docker-registry my-odm-pull-secret \
  --docker-server=cp.icr.io --docker-username=cp \
  --docker-password="<API_KEY_GENERATED>" --docker-email=<USER_EMAIL> 

where  

  • <API_KEY_GENERATED> is the entitlement key that you retrieve from MyIBM Container Software Library using your IBMid. Make sure you enclose the key in double quotation marks. 

  • <USER_EMAIL> is the email address that is associated with your IBMid. 

  2. Create a secret with your PostgreSQL database credentials. 

kubectl apply -f postgres-secret.yaml 

An example of the YAML file: 

apiVersion: v1 

kind: Secret 

metadata: 

  name: my-odm-auth-secret-postgres 

type: Opaque 

stringData: 

  db-user: <my_user> 

  db-password: <my_password> 

  3. Create secrets that hold the Microsoft Entra ID (ex AzureAD) and DigiCert certificates and the ODM configuration files: 

keytool -printcert -sslserver login.microsoftonline.com -rfc > microsoft.crt 

kubectl create secret generic my-odm-auth-secret-ms --from-file=tls.crt=microsoft.crt 
curl --silent --remote-name https://cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem 

kubectl create secret generic my-odm-auth-secret-digicert --from-file=tls.crt=DigiCertGlobalRootCA.crt.pem 

kubectl create secret generic my-odm-auth-secret-azuread --from-file=OdmOidcProviders.json=./output/OdmOidcProviders.json --from-file=openIdParameters.properties=./output/openIdParameters.properties --from-file=openIdWebSecurity.xml=./output/openIdWebSecurity.xml --from-file=webSecurity.xml=./output/webSecurity.xml 

Follow the instructions in the article Create secrets to configure ODM with Microsoft Entra ID (ex AzureAD) to obtain the certificates and the related configuration files. 

Note: You can customize the generated webSecurity.xml to add additional users with basic authentication in the <basicRegistry> element if needed. 

At this stage, the authoring namespace is ready to receive an ODM deployment. 

 

3.2.3. Authoring environment installation procedure 

 

  1. Customize the values.yaml file and specify the values of the parameters for the ODM Authoring environment to install the chart.  

Here is a sample myvalues-authoring.yaml file allowing an ODM Authoring deployment containing 2 Decision Center replicas, 2 Decision Runner replicas, and a single Decision Server console: 

customization: 

  runAsUser: "" 

  deployForProduction: true 

  authSecretRef: my-odm-auth-secret-azuread 

  trustedCertificateList: 

    - my-odm-auth-secret-ms 

    - my-odm-auth-secret-digicert

license: true 

oidc: 

  enabled: true 

 

serviceAccountName: '' 

service: 

  #  enableTLS=true (default value) 

  type: ClusterIP 

  ingress: 

    enabled: true 

    host: odm-authoring.<my_company>-aws.com 

    tlsHosts: odm-authoring.<my_company>-aws.com 

#    tlsSecretRef: ingress-tls 

    annotations: 

      - alb.ingress.kubernetes.io/backend-protocol: HTTPS 

      - alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-3:XXXXX:certificate/YYYYY 

      - alb.ingress.kubernetes.io/scheme: internet-facing 

      - alb.ingress.kubernetes.io/target-type: ip 

externalDatabase: 

  type: postgresql 

  secretCredentials: my-odm-auth-secret-postgres 

  databaseName: odmgolda 

  serverName: my-odm-gold-db.cluster-XXX.eu-west-3.rds.amazonaws.com 

  port: '5432' 

image: 

  repository: cp.icr.io/cp/cp4a/odm 

  pullSecrets: my-odm-pull-secret 

decisionCenter: 

  enabled: true 

  extendRoleMapping: true 

  replicaCount: 2 

  resources: 

    limits: 

      cpu: '2' 

      memory: 8Gi 

    requests: 

      cpu: '2' 

      memory: 4Gi 

decisionServerRuntime: 

  enabled: false 

decisionRunner: 

  enabled: true 

  extendRoleMapping: true 

  replicaCount: 2 

  resources: 

    limits: 

      cpu: '2' 

      memory: 2Gi 

    requests: 

      cpu: '2' 

      memory: 2Gi 

decisionServerConsole: 

  extendRoleMapping: true 

  resources: 

    limits: 

      cpu: '2' 

      memory: 1Gi 

    requests: 

      cpu: 500m 

      memory: 512Mi 

 

To know more about the ODM parameters, see  ODM for production configuration parameters.  

For information about AWS Load Balancer Controller, see https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.7/guide/ingress/annotations/ . 
 

  2. Install the ODM Authoring deployment in the authoring namespace with the customized myvalues-authoring.yaml file: 

helm install ibm-odm-auth ibm-helm-repo/ibm-odm-prod -f myvalues-authoring.yaml -n authoring --version 23.2.0 

  3. To check the installation status, you can run the following commands: 

helm status ibm-odm-auth 
helm get values ibm-odm-auth 

At this stage, ODM pods are deployed in the authoring namespace. We need to handle the Ingress policy to enable access to the ODM Services. 

 

3.2.4. Authoring environment Ingress configuration 

  1. Edit the Ingress: 

kubectl edit ingress ibm-odm-auth-odm-ingress 

  2. Set spec.ingressClassName in the Ingress instance to alb. 

    spec:
      ingressClassName: alb 

  3. Check that the rules.host parameter is set to odm-authoring.<my_company>-aws.com as defined in your YAML file. It will be needed when creating the DNS record in the next step. 
  4. Make sure that the alb.ingress.* annotations are present in the metadata of the Ingress instance. 
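Setting the ingress class can also be done non-interactively; a sketch with a merge patch (same Ingress name as above, standard kubectl patch syntax):

```shell
# Merge-patch the Ingress so spec.ingressClassName is alb
PATCH='{"spec":{"ingressClassName":"alb"}}'
kubectl patch ingress ibm-odm-auth-odm-ingress -n authoring --type merge -p "$PATCH"

# Verify the result:
# kubectl get ingress ibm-odm-auth-odm-ingress -n authoring -o jsonpath='{.spec.ingressClassName}'
```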

Description of Ingress object in current namespace (fig. 4) 
 

 

  5. Check that the corresponding load balancer instance has been created successfully. In the AWS console, search for EC2 resources, go to Load balancers, and find your instance. 

Tip: if too many ingresses are listed, you can filter by your namespace name. 

 

Load balancer list in current namespace (fig. 5) 

 

Tip: if you do not see your load balancer listed, check that you have a Load Balancer Controller up and running using the following command: 

kubectl get deployment -n kube-system aws-load-balancer-controller 

NAME                                         READY   UP-TO-DATE   AVAILABLE 

aws-load-balancer-controller     2/2          2                     2       

  

  6. Enable sticky sessions for Decision Center: 

    a. In the AWS console, search for Target groups, and edit each of the target groups. 

    b. Enable Stickiness. 

    c. Set Stickiness type to “Application-based cookie”. 

    d. Set the Stickiness duration to 8 hours, which corresponds to the invalidation timeout set in Decision Center. 

    e. Set “App cookie name” to JSESSIONID_DC_<RELEASE_NAME> 


Target groups edition wizard (fig. 6) 

To know more about sticky sessions, see Sticky sessions for your Application Load Balancer. 
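Alternatively, the same stickiness settings can be declared up front in the Ingress annotations instead of the AWS console, using the load balancer controller's target-group-attributes annotation. A sketch, assuming the ibm-odm-auth release name (so the cookie is JSESSIONID_DC_ibm-odm-auth) and 28800 seconds for 8 hours; note that this annotation applies to every target group behind the Ingress:

```yaml
# Additional entry for service.ingress.annotations in myvalues-authoring.yaml
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.type=app_cookie,stickiness.app_cookie.cookie_name=JSESSIONID_DC_ibm-odm-auth,stickiness.app_cookie.duration_seconds=28800
```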

  7. In the AWS console, search for Route 53 resources, go to Hosted zones, select your target hosted zone (which should look like <my_company>-aws.com), and create a DNS record with the following options:  

Record wizard (fig. 7) 

Caution: make sure that the record name matches the host defined in your Ingress (odm-authoring). 

  8. As a result, the Decision endpoints will be: 

  • https://odm-authoring.<my_company>-aws.com/decisioncenter 

  • https://odm-authoring.<my_company>-aws.com/res 

  • https://odm-authoring.<my_company>-aws.com/DecisionRunner 

  9. Register the redirect URLs into your Microsoft Entra ID (ex AzureAD) application as explained in this documentation section: Complete post-deployment tasks. 

 

3.3 Installing the Sandbox1 environment 

 

3.3.1. Sandbox1 environment architecture and setup 

 

This environment is composed of a single Decision Server console and a single Decision Server runtime.  The goal of this environment is to deliver a sandbox to test and execute the Decision Services for a developer or a development team.  The sandbox environment will make use of an external PostgreSQL database so that the first round of tests can be done against imported production data. 

 

Une image contenant texte, nombre, Police, reçu

Description générée automatiquement 

Sandbox1 environment requirements and parameters summary (fig. 8) 

 

3.3.2. Sandbox1 environment installation procedure 

 

  1. Create a namespace for your ODM Sandbox1 environment. For example:  

kubectl create ns sandbox1 

  2. Set the context to this namespace: 

kubectl get ns 

kubectl config set-context --current --namespace=sandbox1 

  3. Create the secrets as mentioned in the Authoring section for the entitlement registry, PostgreSQL, Microsoft Entra ID (ex AzureAD) certificates and ODM configuration files. Follow the procedure described in the Authoring environment prerequisite procedure section. 

  4. Customize the values.yaml file and specify the values of the parameters for the ODM Sandbox1 environment to install the chart.  

 
Here is a sample myvalues-sandbox1.yaml file to deploy a Decision Server runtime and a Decision Server console. Note that the parameter customization.deployForProduction is set to false. 

customization: 

  runAsUser: "" 

  deployForProduction: false 

  authSecretRef: my-odm-auth-secret-azuread 

  trustedCertificateList: 

    - my-odm-auth-secret-ms 

    - my-odm-auth-secret-digicert 

license: true 

oidc: 

  enabled: true 

serviceAccountName: '' 

service: 

  #  enableTLS=true (default value) 

  type: ClusterIP 

  ingress: 

    enabled: true 

    host: odm-sandbox1.<my_company>-aws.com   

    tlsHosts: odm-sandbox1.<my_company>-aws.com   

    #    tlsSecretRef: ingress-tls 

    annotations: 

      alb.ingress.kubernetes.io/backend-protocol: HTTPS 

      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-3:XXXXX:certificate/YYYYY 

      alb.ingress.kubernetes.io/scheme: internet-facing 

      alb.ingress.kubernetes.io/target-type: ip 

 

externalDatabase: 

  type: postgresql 

  secretCredentials: my-odm-auth-secret-postgres 

  databaseName: odmgolds1 

  serverName: my-odm-gold-db.cluster-XXX.eu-west-3.rds.amazonaws.com 

  port: '5432' 

image: 

  repository: cp.icr.io/cp/cp4a/odm 

  pullSecrets: my-odm-pull-secret 

decisionCenter: 

  enabled: false 

decisionServerRuntime: 

  enabled: true 

  extendRoleMapping: true 

  replicaCount: 1 

  resources: 

    limits: 

      cpu: '1' 

      memory: 2Gi 

    requests: 

      cpu: '1' 

      memory: 2Gi 

decisionRunner: 

  enabled: false 

decisionServerConsole: 

  extendRoleMapping: true 

  resources: 

    limits: 

      cpu: '1' 

      memory: 1Gi 

    requests: 

      cpu: 500m 

      memory: 512Mi 

To know more about the ODM parameters, see ODM for production configuration parameters. 
 

  5. Install the ODM Sandbox1 deployment in the sandbox1 namespace using the customized myvalues-sandbox1.yaml file: 

helm install ibm-odm-sandbox1 ibm-helm-repo/ibm-odm-prod -f myvalues-sandbox1.yaml -n sandbox1 --version 23.2.0

 

  6. To check the installation status, you can run the following commands: 

helm status ibm-odm-sandbox1  
helm get values ibm-odm-sandbox1 

 

  7. Edit the Ingress (ibm-odm-sandbox1-odm-ingress) as described in the Authoring environment Ingress configuration section. Check that the rules.host parameter is set to:  

odm-sandbox1.<my_company>-aws.com

  as defined in your YAML file.  

  8. Add a DNS record in Route 53. Take note that the record name should be configured as odm-sandbox1. 

  9. The Decision Console and Decision Runtime endpoints will be: 

  • https://odm-sandbox1.<my_company>-aws.com/res 

  • https://odm-sandbox1.<my_company>-aws.com/DecisionService 

  10. Register the redirect URLs into your Microsoft Entra ID (ex AzureAD) application as explained in this documentation section: Complete post-deployment tasks. 

 

3.4 Installing the Sandbox2 environment 

3.4.1. Sandbox2 environment architecture and setup 

 

This second sandbox is also composed of a single Decision Server console and a single Decision Server runtime. However, it illustrates the use of the internal database. 

 


Sandbox2 environment requirements and parameters summary (fig. 9) 

 

3.4.2. Sandbox2 environment installation procedure 

 

  1. Create a namespace for your ODM Sandbox2 environment. For example:  

kubectl create ns sandbox2 

  2. Set the context to this namespace: 

kubectl get ns 

kubectl config set-context --current --namespace=sandbox2 

  3. Create the secrets as mentioned in the Authoring section for the entitlement registry, Microsoft Entra ID (ex AzureAD) certificates and ODM configuration files. Follow the procedure described in the Authoring environment prerequisite procedure section. 

  4. Retrieve the storage class set up on your cluster: 

kubectl get sc 
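Illustrative output on a default Amazon EKS cluster — your storage classes may differ; the name reported here is the value to use for internalDatabase.persistence.storageClassName below:

```
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   12d
```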


  5. Customize the values.yaml file and specify the values of the parameters for the ODM Sandbox2 environment to install the chart.  

 
Here is a sample myvalues-sandbox2.yaml file to deploy a Decision Server runtime and a Decision Server console. Note that the parameter customization.deployForProduction is set to false. 

customization: 

  runAsUser: "" 

  deployForProduction: false 

  authSecretRef: my-odm-auth-secret-azuread 

  trustedCertificateList: 

    - my-odm-auth-secret-ms 

    - my-odm-auth-secret-digicert 

license: true 

oidc: 

  enabled: true 

serviceAccountName: '' 

service: 

  #  enableTLS=true (default value) 

  type: ClusterIP 

  ingress: 

    enabled: true 

    host: odm-sandbox2.<my_company>-aws.com   

    tlsHosts: odm-sandbox2.<my_company>-aws.com   

    #    tlsSecretRef: ingress-tls 

    annotations: 

      alb.ingress.kubernetes.io/backend-protocol: HTTPS 

      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-3:XXXXX:certificate/YYYYY 

      alb.ingress.kubernetes.io/scheme: internet-facing 

      alb.ingress.kubernetes.io/target-type: ip 

 

internalDatabase: 

  persistence: 

    enabled: true 

    resources: 

      requests: 

        storage: 5Gi 

    storageClassName: gp2 

    useDynamicProvisioning: true 

  populateSampleData: false 

image: 

  repository: cp.icr.io/cp/cp4a/odm 

  pullSecrets: my-odm-pull-secret 

decisionCenter: 

  enabled: false 

decisionServerRuntime: 

  enabled: true 

  extendRoleMapping: true 

  replicaCount: 1 

  resources: 

    limits: 

      cpu: '1' 

      memory: 2Gi 

    requests: 

      cpu: '1' 

      memory: 2Gi 

decisionRunner: 

  enabled: false 

decisionServerConsole: 

  extendRoleMapping: true 

  resources: 

    limits: 

      cpu: '1' 

      memory: 1Gi 

    requests: 

      cpu: 500m 

      memory: 512Mi 

To know more about the ODM parameters, see ODM for production configuration parameters. 
 

  6. Install the ODM Sandbox2 deployment in the sandbox2 namespace using the customized myvalues-sandbox2.yaml file: 

helm install ibm-odm-sandbox2 ibm-helm-repo/ibm-odm-prod -f myvalues-sandbox2.yaml -n sandbox2 --version 23.2.0 

 

  7. To check the installation status, you can run the following commands: 

helm status ibm-odm-sandbox2  
helm get values ibm-odm-sandbox2  

  8. Edit the Ingress (ibm-odm-sandbox2-odm-ingress) as described in the Authoring environment Ingress configuration section. Check that the rules.host parameter is set to  

odm-sandbox2.<my_company>-aws.com  

as defined in your YAML file.  

  9. Add a DNS record in Route 53. Take note that the record name should be configured as odm-sandbox2. 

  10. The Decision Console and Decision Runtime endpoints will be: 

  • https://odm-sandbox2.<my_company>-aws.com/res 

  • https://odm-sandbox2.<my_company>-aws.com/DecisionService 

  11. Register the redirect URLs into your Microsoft Entra ID (ex AzureAD) application as explained in this documentation section: Complete post-deployment tasks. 

 

3.5 Installing the Pre-Production environment 

 

3.5.1. Pre-production environment architecture and setup 

 

This environment is composed of a single Decision Server console and several Decision Server runtimes. Its purpose is to mimic the Production environment so that you can run performance tests of the Decision Services before deploying them to Production. 

 


Pre-prod environment requirements and parameters summary (fig. 10) 

 

3.5.2. Pre-production environment installation procedure 

 

  1. Create a namespace for your ODM Pre-prod environment. For example:  

kubectl create ns preproduction

  2. Set the context to this namespace: 

kubectl get ns 

kubectl config set-context --current --namespace=preproduction 

  3. Create the secrets as mentioned in the Authoring section for the entitlement registry, PostgreSQL, Microsoft Entra ID (ex AzureAD) certificates and ODM configuration files. Follow the procedure described in the Authoring environment prerequisite procedure section. 

  4. Customize the values.yaml file and specify the values of the parameters for the ODM Pre-prod environment to install the chart.  

Here is a sample myvalues-preproduction.yaml file allowing an ODM Pre-production deployment containing 3 Decision Server runtimes and a Decision Server console. Note that the parameter customization.deployForProduction is set to false. 

customization: 

  runAsUser: "" 

  deployForProduction: false 

  authSecretRef: my-odm-auth-secret-azuread 

  trustedCertificateList: 

    - my-odm-auth-secret-ms 

    - my-odm-auth-secret-digicert 

license: true 

oidc: 

  enabled: true 

serviceAccountName: '' 

service: 

  #  enableTLS=true (default value) 

  type: ClusterIP 

  ingress: 

    enabled: true 

    host: odm-preproduction.<my_company>-aws.com   

    tlsHosts: odm-preproduction.<my_company>-aws.com 

    annotations: 

      - alb.ingress.kubernetes.io/backend-protocol: HTTPS 

      - alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-3:XXXXXXXX:certificate/YYYYYYYYY 

      - alb.ingress.kubernetes.io/scheme: internet-facing 

      - alb.ingress.kubernetes.io/target-type: ip 

externalDatabase: 

  type: postgresql 

  secretCredentials: my-odm-auth-secret-postgres 

  databaseName: odmgoldpp 

  serverName: my-odm-gold-db.cluster-XXX.eu-west-3.rds.amazonaws.com 

  port: '5432' 

image: 

  repository: cp.icr.io/cp/cp4a/odm 

  pullSecrets: my-odm-pull-secret 

decisionCenter: 

  enabled: false 

decisionServerRuntime: 

  enabled: true 

  extendRoleMapping: true 

  replicaCount: 3 

  resources: 

    limits: 

      cpu: '2' 

      memory: 2Gi 

    requests: 

      cpu: '2' 

      memory: 2Gi 

decisionRunner: 

  enabled: false 

decisionServerConsole: 

  extendRoleMapping: true 

  resources: 

    limits: 

      cpu: '2' 

      memory: 1Gi 

    requests: 

      cpu: 500m 

      memory: 512Mi 

To know more about the ODM parameters, see ODM for production configuration parameters. 

  5. Install the ODM Pre-production deployment in the preproduction namespace with the customized myvalues-preproduction.yaml file: 

helm install ibm-odm-prep ibm-helm-repo/ibm-odm-prod -f myvalues-preproduction.yaml -n preproduction --version 23.2.0 

  6. To check the installation status, you can run the following commands: 

helm status ibm-odm-prep 
helm get values ibm-odm-prep 

  7. Edit the Ingress (ibm-odm-prep-odm-ingress) as described in the Authoring environment Ingress configuration section. Check that the rules.host parameter is set to 

odm-preproduction.<my_company>-aws.com 

as defined in your YAML file.  

  8. Add a DNS record in Route 53. Take note that the record name should be configured as odm-preproduction. 

  9. The Decision Console and Decision Runtime endpoints will be: 

  • https://odm-preproduction.<my_company>-aws.com/res 

  • https://odm-preproduction.<my_company>-aws.com/DecisionService 

  10. Register the redirect URLs into your Microsoft Entra ID (ex AzureAD) application as explained in this documentation section: Complete post-deployment tasks. 

 

4. Production cluster 

 

The procedure in this section aims to guide you through the ODM Production deployment in the OpenShift cluster. 

 

4.1 Production cluster prerequisites 

 

  1. Make sure that you have installed the OpenShift CLI (“oc”) and relevant command line tools.  

  2. You must install the IBM License Service (once) in your OpenShift cluster. For more information, see the section “In OpenShift” in Licensing and metering. 

  3. Obtain the Helm chart, if you have not done so already. See Step 1 of Installing a Helm release of ODM for production. 

  4. Run the following commands to add and update the ibm-helm-repo repository, if you have not done so already: 

HELM_REPO="https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm" 

helm repo add ibm-helm-repo $HELM_REPO 

helm repo update 

 

4.2 Installing the Production environment 

 

4.2.1. Production environment architecture and setup 

 


Production environment requirements and parameters summary (fig. 11) 

 

4.2.2. Production environment installation procedure 

 

When the preparations are done, you can proceed with the ODM deployment on the OpenShift cluster using the Helm chart.  

 

  1. Create a namespace for your ODM Production environment. For example:  

oc new-project production 

  2. Create the secret my-odm-prod-secret-ldap for the LDAP configuration, where webSecurity.xml can be one of the options described in Configuring user access without OpenID.  
    Note: You can customize the webSecurity.xml to add additional users with basic authentication if needed. 

oc create secret generic my-odm-prod-secret-ldap --from-file=webSecurity.xml=webSecurity.xml  

  3. Create the secret my-odm-prod-secret-db2-ssl containing the Db2 SSL certificate. For more information on how to generate the Db2 SSL certificate, see Configuring TLS support in a Db2 server using a self-signed certificate. An example to create the secret: 

oc create secret generic my-odm-prod-secret-db2-ssl --from-file="truststore.jks" --from-literal=truststore_password=password 

  4. Create the secret my-odm-prod-secret-db2 to hold the Db2 credentials, and the secret my-odm-prod-secret-ldap-cert to include the LDAP SSL certificate:  

oc apply -f secret.yaml 

where the secret.yaml file contains: 

apiVersion: v1 

kind: Secret 

metadata: 

  name: my-odm-prod-secret-db2 

type: Opaque 

stringData: 

  db-user: <my_Db2User> 

  db-password: <my_Db2pass> 

 

--- 

kind: Secret 

apiVersion: v1 

metadata: 

  name: my-odm-prod-secret-ldap-cert 

data: 

  tls.crt: >- 

    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUdJVENDQlFtZ0F3SUJBZ0lE 

 

UblgyYXNpa2EweEgzZ1d1b1pqQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0= 

type: Opaque 

For related information about securing LDAP by SSL, see Configuring LDAP over SSL. 
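The data.tls.crt value in the secret above is the base64-encoded PEM certificate. Assuming the LDAP CA certificate is saved as ldap-ca.crt (a placeholder file name) and GNU coreutils, it can be produced with:

```shell
# Emit the certificate as a single unwrapped base64 line for the data: field
base64 -w0 ldap-ca.crt
```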

 

  5. Customize the values.yaml file and specify the values of the parameters for the ODM Production environment to install the chart. You can extract the values.yaml file from the Helm chart (ibm-odm-prod-version.tgz archive). 

 

Here is a sample myvalues-production.yaml file allowing an ODM Production deployment with an external Db2 database and Active Directory LDAP over SSL. Note that the parameter customization.deployForProduction is set to true. 

customization: 

  runAsUser: "" 

  deployForProduction: true 

  # holding secret with LDAP connection credentials 

  authSecretRef: my-odm-prod-secret-ldap 

  # Specify a list of secrets that encapsulate certificates in PEM format to be included in the truststore 

  trustedCertificateList: 

    - my-odm-prod-secret-ldap-cert 

license: true 

serviceAccountName: '' 

service: 

  enableRoute: true 

externalDatabase: 

  type: db2 

  secretCredentials: my-odm-prod-secret-db2 

  databaseName: odmgoldp 

  serverName: my-db2-server-Name 

  sslSecretRef: my-odm-prod-secret-db2-ssl 

  port: '60001' 

image: 

  repository: cp.icr.io/cp/cp4a/odm 

decisionCenter: 

  enabled: false 

decisionServerRuntime: 

  enabled: true 

  extendRoleMapping: true 

  replicaCount: 3 

  resources: 

    limits: 

      cpu: '2' 

      memory: 2Gi 

    requests: 

      cpu: '2' 

      memory: 2Gi 

decisionRunner: 

  enabled: false 

decisionServerConsole: 

  extendRoleMapping: true 

  resources: 

    limits: 

      cpu: '2' 

      memory: 1Gi 

    requests: 

      cpu: 500m 

      memory: 512Mi 

To know more about the ODM parameters, see ODM for production configuration parameters. 

 

  6. Install the ODM Production environment in the production namespace with the customized myvalues-production.yaml file using the following command: 

helm install ibm-odm-prod ibm-helm-repo/ibm-odm-prod -f myvalues-production.yaml -n production --version 23.2.0 

 

  7. To check the installation status, you can run the following commands: 

helm status ibm-odm-prod 
helm get values ibm-odm-prod 

 

  8. To get the Decision Console and Decision Runtime endpoints, you can run this command: 

oc get route 

 

5. Validate your ODM environments 

 Once everything is well configured and deployed, you can perform post-installation tasks as described in Completing post-installation tasks.  
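As a quick smoke test once a ruleset is deployed, you can call the Decision Server REST execution API. A sketch with placeholder host, ruleapp, and ruleset names — adapt the authentication to your IAM setup:

```shell
# Build the ruleset execution URL (Decision Server REST pattern:
# /DecisionService/rest/v1/<ruleapp>/<version>/<ruleset>/<version>)
BASE="https://odm-preproduction.<my_company>-aws.com"
URL="$BASE/DecisionService/rest/v1/myRuleApp/1.0/myRuleset/1.0"
echo "$URL"

# POST an input payload to execute the ruleset:
# curl -k -H "Content-Type: application/json" -d '{}' "$URL"
```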

 

Coming soon: we will provide an article with additional ODM-level validations dedicated to ODM on Certified Kubernetes, on the same basis as this CP4BA one. Stay tuned! 

 
