Deploying IBM Cloud Pak for Integration 2019.4 on OCP 4.2

Tue July 28, 2020 06:05 AM

In this article we will learn how to install IBM Cloud Pak for Integration (CP4I) 2019.4 on OpenShift Container Platform 4.2.

Prerequisites

Below are the prerequisites for installing IBM Cloud Pak for Integration 2019.4:

System Requirements

  1. Red Hat OpenShift Container Platform 4.2 on Linux® 64-bit
  2. CP4I common services and the different integration capabilities have certain file system and storage requirements: file storage with RWO and RWX access modes, and block storage with RWO. OpenShift Container Storage (OCS), which is backed by Ceph, can be deployed to provide both types of storage. You can follow the article below to deploy OCS on OpenShift 4.2. This recipe assumes that both File (RWO + RWX) and Block (RWO) storage are available and that the respective storage classes have been configured on OCP.

    Deploying your storage backend using OpenShift Container Storage 4

  3. In addition to the OCP master and worker nodes, an infra node has been provisioned with a public IP address; it has access to the OCP cluster nodes and is allowed to access the deployed services from outside. We will use this node as a jump box, and you should have root-level access on it.
  4. Determine the size of your cluster keeping in mind:
    – The workload size you expect to run
    – The integration capabilities that you expect to run in High Availability or Single instance mode
    – The Common Services, Asset Repository and Operations Dashboard requirements
    – Scalability requirements

Note: this recipe only provides guidance for deploying CP4I 2019.4 on OCP 4.2. It does not cover the aspects of deploying the platform in a production environment.

Step-by-step

Validate prerequisites and OCP cluster

Log in to the infra node (or boot node, as the case may be) and check whether the oc tool is installed. If it is not installed, follow the steps below:
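A quick way to check whether the client is already present (a minimal shell sketch; the message text is just an example):

which oc || echo "oc client not found"

If the client is already installed, you can skip ahead to the login step.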

In the OCP console, click on ‘Command line tools’


Click on ‘Download oc’


After downloading the file ‘oc.tar.gz’, extract it using the command below, give it the appropriate permissions, and move it to the /usr/bin directory:

tar xzvf oc.tar.gz
chmod 755 oc
mv oc /usr/bin

Now log in to the OCP cluster with the oc CLI:

oc login --server=<OCP api server> -u <ocp admin user> -p <password>
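For example, for the cluster used in this article (the admin user name here is illustrative; use your own cluster-admin credentials):

oc login --server=https://api.prod3.os.fyre.ibm.com:6443 -u admin -p <password>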

You may also log in using a generated token. To get the login command, log in to the OCP console and click on ‘Copy login command’


Click on ‘Display Token’


Copy the login command with token


Log in to OCP using this login command


By default, the OpenShift Container Platform registry is secured during cluster installation so that it serves traffic through TLS. Unlike previous versions of the OpenShift Container Platform, the registry is not exposed outside of the cluster at the time of installation.

Instead of logging in to the OpenShift Container Platform registry from within the cluster, you can gain external access to it by exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images using the route host.

Run the command below in a single line to expose the OCP registry:

oc patch configs.imageregistry.operator.openshift.io/cluster 
   --patch '{"spec":{"defaultRoute":true}}' --type=merge

Then get the host name of the exposed registry route:

oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'
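For the cluster used in this article, this returns the host name below, which is used in the examples later in this recipe:

default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com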

Use the command below to check if File and Block storage classes are available to use:

oc get sc
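The output should look broadly like the following; the class names and provisioners shown here are only an illustration of an OCS/Rook-Ceph backed setup (rook-ceph-cephfs-internal is the file storage class used later in this article), and yours may differ:

NAME                        PROVISIONER                      AGE
rook-ceph-block-internal    rook-ceph.rbd.csi.ceph.com       2d
rook-ceph-cephfs-internal   rook-ceph.cephfs.csi.ceph.com    2d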


Run the command below to verify that all OCP nodes are in the ‘Ready’ state:

oc get nodes
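Sample output for the cluster used in this article (the master node names, ages and versions are illustrative; OCP 4.2 reports a Kubernetes 1.14 kubelet version):

NAME                            STATUS   ROLES    AGE   VERSION
master0.prod3.os.fyre.ibm.com   Ready    master   10d   v1.14.6+...
worker3.prod3.os.fyre.ibm.com   Ready    worker   10d   v1.14.6+...
worker4.prod3.os.fyre.ibm.com   Ready    worker   10d   v1.14.6+...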

Install Docker on the jump box and configure access to the OCP registry

You need a version of Docker that is supported by OpenShift installed on your jump box/boot node; any Docker version supported by OpenShift will work on the boot node, and only Docker is currently supported.

Run the commands below to install and start Docker:

yum install docker -y
systemctl start docker
systemctl enable docker

Check the Docker status:

systemctl status docker

Navigate to /etc/docker/certs.d and create a folder with the same name as the external URL of the registry (if the certs.d folder doesn’t exist, create it first). The external URL of the registry can be found using the command below:

oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'

mkdir default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com
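Alternatively, the directory can be created in a single step by substituting the route host directly (assuming you are still logged in with oc):

mkdir -p /etc/docker/certs.d/$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')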

Change into this directory and run the command below in a single line to pull the registry certificate:

ex +'/BEGIN CERTIFICATE/,/END CERTIFICATE/p' <(echo | openssl s_client -showcerts 
  -connect <external url for OCP registry>) -scq > ca.crt

For example:

ex +'/BEGIN CERTIFICATE/,/END CERTIFICATE/p' <(echo | openssl s_client -showcerts 
  -connect default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com:443) -scq > ca.crt

Restart the Docker service:
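systemctl restart docker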

Now validate that you are able to log in to the OCP registry using the command below:

docker login <OCP registry url> -u $(oc whoami) -p $(oc whoami -t)

For example:

docker login default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com -u $(oc whoami) -p $(oc whoami -t)
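If the certificate and credentials are in place, Docker reports:

Login Succeeded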

Download the CP4I installation package

The base product installation creates an instance of the Platform Navigator, along with the common services. All of the other components are optional and immediately available to install through the Platform Navigator. The entire product and all components run within a required Red Hat OpenShift Container Platform environment.

You have the following choices for installing IBM Cloud Pak for Integration. All downloads are available from IBM Passport Advantage.

  • Download the base product and all component packages. This method can be used in air-gapped environments.
  • Download the base product only. All other component packages reside in the online IBM Entitled Registry. Execute the installation procedures to install on a Red Hat OpenShift Container Platform. This method requires internet access but saves time.

Configure the cluster configuration file

Change to the installer_files/cluster/ directory. Place the cluster configuration file (admin.kubeconfig) in the installer_files/cluster/ directory and rename it to kubeconfig. This file may reside in the setup directory used to create the cluster. If it is not available, you can log in to the cluster as admin using oc login and then issue the following command:

oc config view --minify=true --flatten=true > kubeconfig

The resulting kubeconfig file looks similar to the following:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://api.prod3.os.fyre.ibm.com:6443
  name: api-prod3-os-fyre-ibm-com:6443
contexts:
- context:
    cluster: api-prod3-os-fyre-ibm-com:6443
    namespace: default
    user: admin/api-prod3-os-fyre-ibm-com:6443
  name: default/api-prod3-os-fyre-ibm-com:6443/admin
current-context: default/api-prod3-os-fyre-ibm-com:6443/admin
kind: Config
preferences: {}
users:
- name: admin/api-prod3-os-fyre-ibm-com:6443
  user:
    token: klI928FXCt-0Va8lI2h7VFLN_mwCbyIuaQa_lJ_mM8M

Configure installation environment

Extract the contents of the archive with a command similar to the following.

tar xzvf ibm-cp-int-2019.4.x-offline.tar.gz

Then load the common services images into Docker:

tar xvf installer_files/cluster/images/common-services-armonk-x86_64.tar.gz -O|docker load

Configure your cluster

You need to configure your cluster by modifying the installer_files/cluster/config.yaml file. You can use your OpenShift master and infrastructure nodes here, or install these components to dedicated OpenShift compute nodes. You can specify more than one node for each type to build a high availability cluster. After using oc login, use the command oc get nodes to obtain these values. Note that you would likely want to use a worker node.

Open the config.yaml in an editor.

vi config.yaml

Update the following sections in config.yaml. Below is an example:

cluster_nodes:
  master:
  - worker3.prod3.os.fyre.ibm.com
  proxy:
  - worker4.prod3.os.fyre.ibm.com
  management:
  - worker4.prod3.os.fyre.ibm.com

Specify the storage class. You can specify a separate storage class for storing log data. Below is an example:

# This storage class is used to store persistent data for the common services
# components
storage_class: rook-ceph-cephfs-internal

## You can set a different storage class for storing log data.
## By default it will use the value of storage_class.
# elasticsearch_storage_class:

Specify a password for the admin user and also specify a password rule, for example:

default_admin_password: admin
password_rules:
# - '^([a-zA-Z0-9\-]{32,})$'
- '(.*)'

Leave the rest of the file unchanged unless you want to change the namespaces for the respective integration capabilities. Save the file.

The value of the master, proxy, and management parameters is an array and can have multiple nodes. Due to a limitation from OpenShift, if you want to deploy on any master or infrastructure node, you must label the node as an OpenShift compute node with the following command:

oc label node <master node host name/infrastructure node host name> node-role.kubernetes.io/compute=true

Install CP4I

Once preparation completes, run the installation command from the same directory that contains the config.yaml file. You can use the command docker images | grep inception to see the image name and tag to use in the installation command.
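For the offline package used in this recipe, the relevant entry looks similar to the line below (image ID, creation time and size omitted); the repository and tag are the values used in the docker run command that follows:

ibmcom/icp-inception-amd64   3.2.2   ...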

Run the command below in a single line:

sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v 
/var/run:/var/run:z -v /etc/docker:/etc/docker:z --security-opt label:disable 
ibmcom/icp-inception-amd64:3.2.2 addon

If you are deploying as the root user, run this command without sudo.

This process transfers the product packages from the boot node to the cluster registry. This can take several hours to complete.

Once the installation is complete, the Platform Navigator will be available at the endpoint below:

https://ibm-icp4i-prod-integration.<openshift apps domain>/

You can use the command below to get the OCP apps domain:

oc -n openshift-console get route console -o jsonpath='{.spec.host}'| cut -f 2- -d "."
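For the cluster used in this article, this returns apps.prod3.os.fyre.ibm.com, so the Platform Navigator is available at:

https://ibm-icp4i-prod-integration.apps.prod3.os.fyre.ibm.com/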

You can navigate to the OpenShift console and the Cloud Pak foundation by clicking on the hamburger menu.


Note that if you want to use the ‘Operations Dashboard’ with your integration components, you should provision the ‘Operations Dashboard’ instance first so that you can refer to it while creating an instance of an integration capability.

Conclusion

In this article we have learned the installation steps for IBM Cloud Pak for Integration 2019.4 on OCP 4.2.

3 comments on “Deploying IBM Cloud Pak for Integration 2019.4 on OCP 4.2”

  1. longnguk May 30, 2020

    Excellent article, very easy to follow. However, my installation (CP4I 2020.1.1) fails at the ‘Configuring cloudctl’ step after 5 attempts. Please do share any suggestions for debugging/resolving the issue. My vSphere OCP 4.3 cluster consists of 3 masters and 3 workers, and I am using vSphere storage (instead of Ceph).
    Thank you,

    FAILED - RETRYING: Configuring cloudctl (1 retries left).
    Result was: changed=true
      attempts: 5
      cmd: bash /tmp/config-cloudctl-script
      delta: '0:00:00.979458'
      end: '2020-05-29 17:06:44.202876'
      invocation:
        module_args:
          _raw_params: bash /tmp/config-cloudctl-script
          _uses_shell: true
          argv: null
          chdir: null
          creates: null
          executable: /bin/bash
          removes: null
          stdin: null
          warn: false
      msg: non-zero return code
      rc: 1
      retries: 6
      start: '2020-05-29 17:06:43.223418'
      stderr: ''
      stderr_lines:
      stdout: |-
        Authenticating...
        Get https://icp-console.apps.openshift4.dlnlab.dln:443/idmgmt/identity/api/v1/teams/resources?resourceType=namespace: EOF
        FAILED
        Set 'CLOUDCTL_TRACE=true' for details
      stdout_lines:

  2. Deepak May 13, 2020

    Can we install CP4I on HP x86 blade servers?

    • Anand.Awasthi May 21, 2020

      Hi Deepak,
      CP4I only depends on OCP, as the underlying infrastructure is abstracted from it. CP4I can be installed wherever OCP can be installed.
      CP4I 2020.1.1 requires OCP 4.3 or OCP 4.2
      CP4I 2019.4 required OCP 4.2
      Thanks.


#IBMCloudPakforIntegration(ICP4I)
#Openshift
#OCP