Introduction
IBM Cloud Pak for Integration (ICP4I) helps you connect applications and data using industry-leading capabilities in a comprehensive integration platform. In this recipe we will cover the following:
1. Provision the infrastructure required to satisfy the minimum system requirements to install ICP4I on Red Hat OpenShift.
2. Install Red Hat OpenShift Container Platform (OCP) on the provisioned infrastructure/VMs.
3. Install ICP4I v2019.3.2 on the provisioned OpenShift cluster.
4. Uninstall ICP4I.
5. Destroy the environment, i.e. the OpenShift cluster as well as the underlying infrastructure.
Steps 1-3 take approximately 5-6 hours to complete.
System Requirements
Review the minimum system requirements for ICP4I; based on the capabilities you want to deploy, decide on the number of nodes in the cluster and their configuration.
In this recipe we will provision the following configuration, which is sufficient to deploy all capabilities (disk storage can be expanded after install).
Provision the infrastructure
We will use Lesson 1 and 2 in https://cloud.ibm.com/docs/terraform?topic=terraform-redhat to provision the infrastructure on IBM cloud.
- Take a VM anywhere (local desktop or cloud), with any operating system, and install Docker on it. We will call this VM the jump server in this recipe.
- After Docker is installed, run "docker pull ibmterraform/terraform-provider-ibm-docker"
- Run "docker run -it ibmterraform/terraform-provider-ibm-docker:latest"
- Run "apk add --no-cache openssh"
- Run "git clone https://github.com/IBM-Cloud/terraform-ibm-openshift.git"
- Run "cd terraform-ibm-openshift"
- Run "ssh-keygen -t rsa -b 4096 -C 'test123@gmail.com'"
- Edit variables.tf (vi variables.tf), using the sample variables.tf as a reference, and update the required values.
- Retrieve your IBM Cloud classic infrastructure user name and API key, as you will be prompted for them later.
- Run "make rhn_username=<your_rhn_username> rhn_password=<your_rhn_password> infrastructure". This step will take approximately 40 minutes to complete.
After the last step, you should see the required VMs created in IBM Cloud classic infrastructure. Ensure the following before moving forward:
- Log in to the bastion node and validate that the 'manage_etc_hosts' flag is False in the file /etc/cloud/cloud.cfg. If it is True, change it to False, then save and exit. Note that this setting exists when the VM is provisioned with a disk size >25GB.
- As per https://bugzilla.redhat.com/show_bug.cgi?id=1749024, on RHEL 7.7 the kernel version should be kernel-3.10.0-1062.el7 for the OCP install. Since the above VMs are configured with RHEL 7.7, we need to ensure the kernel version is appropriate for the OCP install.
- Run "uname -r" to validate the kernel version on the bastion node; if it doesn't match "3.10.0-1062.x", execute "yum update" on the bastion node.
- As a workaround, perform a yum update followed by a reboot on all nodes if you hit the error "The installed kernel version does not meet the required minimum for RHEL 7.7" during the OpenShift install.
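The kernel check above can be scripted. A minimal sketch, where the version string is passed in as an argument so the check can be exercised anywhere; on a node you would call it as `kernel_ok "$(uname -r)"`:

```shell
# Sketch: check whether a kernel version string is in the 3.10.0-1062.*
# series required for the OCP 3.11 install on RHEL 7.7 (see the Bugzilla
# link above). Pass in the output of "uname -r".
kernel_ok() {
  case "$1" in
    3.10.0-1062*) echo "kernel OK: $1"; return 0 ;;
    *) echo "kernel needs update: $1 (run 'yum update' and reboot)"; return 1 ;;
  esac
}

kernel_ok "3.10.0-1062.1.2.el7.x86_64"        # matches the required series
kernel_ok "3.10.0-957.el7.x86_64" || true     # older kernel, flagged
```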
- Ensure the NetworkManager settings are as follows; this keeps the NetworkManager configuration intact after a VM restart and avoids connectivity issues.
- Open /etc/sysconfig/network-scripts/ifcfg-eth0 and ensure the settings are similar to those below.
- Open /etc/sysconfig/network-scripts/ifcfg-eth1 and ensure the settings are similar to those below.
- Also, set NM_CONTROLLED to yes (if it exists).
- Reboot the bastion node.
- Repeat steps 1-4 on each node.
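For reference, the key lines to check in an ifcfg file typically look like the sketch below. The addresses are placeholders; your DEVICE, IPADDR, and NETMASK will differ per interface and node:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (sketch; placeholder values)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=yes
IPADDR=<private_ip>
NETMASK=<netmask>
```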
Deploy Red Hat OpenShift Container Platform
Now we have the infrastructure ready for the deployment of OCP. We will use Lesson 3 in https://cloud.ibm.com/docs/terraform?topic=terraform-redhat for this step.
- Log in to the bastion node
- Run "subscription-manager unregister"
- Run "rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
- Run "subscription-manager register --serverurl subscription.rhsm.redhat.com:443/subscription --baseurl cdn.redhat.com --username <your_redhat_username> --password <your_redhat_password>"
- Run "subscription-manager list --available --matches '*OpenShift Container Platform*'" and note down the Pool ID.
- Exit from the bastion node
- Log in to the jump server and locate the Docker container used to provision the infrastructure:
- Run "docker ps" to find the container ID
- Run "docker exec -it <container id> bash" to get inside the container
- Run "cd terraform-ibm-openshift"
- Run "make rhn_username=<your_rhn_username> rhn_password=<your_rhn_password> pool_id=<pool_ID> rhnregister". This will take approximately 10 minutes.
- Run "make openshift". This will take approximately 2 hours.
- After "make openshift" completes successfully, the OCP cluster should be up and running.
- To access the cluster, add an entry to your local /etc/hosts file mapping the master node's public IP to its hostname.
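A sketch of such an /etc/hosts entry, with placeholder values (use your master node's actual public IP and hostname):

```
<master_public_ip>   <master_hostname>
```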
Set up users and authentication for your OpenShift cluster
By default, the above OCP install uses HTPasswd as the identity provider. It also creates a user 'admin' with password 'test123'. In this step we will assign the cluster administrator role to this user.
- Log in to the master node
- Run "oc login -u system:admin"
- Run "oc adm policy add-cluster-role-to-user cluster-admin admin"
- Now access the OpenShift console at https://master_public_ip:8443/console, log in with user 'admin' and password 'test123', and make sure you are able to log in and browse through the UI without any issues.
- You can run through Lesson 4 in https://cloud.ibm.com/docs/terraform?topic=terraform-redhat to ensure everything is working fine.
Preparing for IBM Cloud Pak for Integration installation
Now that we have an OCP cluster with the desired configuration ready, we need to prepare for the installation of ICP4I.
Expand the disk storage
We provisioned each node with a 100GB boot disk, but the ICP4I install needs ~120GB. To meet this prerequisite we need to add additional storage. Follow these steps to add a 150GB SAN disk to the master node.
- Log in to IBM Cloud classic infrastructure and resize the master node to add a 150GB SAN disk to it.
- Log in to the master node (ssh) and make sure the disk is available by running "lsblk". The output should show the new disk as: xvdc 202:32 0 150G 0 disk
Perform the disk partitioning as described in https://codingbee.net/rhcsa/rhcsa-creating-partitions:
- Run "fdisk /dev/xvdc"
- Add a new partition
- Write the table to disk and exit
- Run "lsblk" to ensure the output shows:
xvdc 202:32 0 150G 0 disk
└─xvdc1 202:33 0 150G 0 part
- Run "mkfs.xfs /dev/xvdc1"
- Run "partprobe"
- Create a new directory under /var, e.g. /var/app
- Mount the disk to the new directory: run "mount -o defaults,noatime /dev/xvdc1 /var/app"
We have now added 150GB of additional disk storage to the master node.
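Note that a mount made with the mount command alone does not survive a reboot. To keep /var/app mounted after a restart, an /etc/fstab entry along these lines could be added (a sketch using the same device and options as above):

```
/dev/xvdc1  /var/app  xfs  defaults,noatime  0 0
```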
Prepare the nodes
Prepare the nodes before starting the installation as follows; refer to Installing IBM Cloud Pak for Integration on OCP 3.11 for more details:
- Label the master node as compute: run "sudo kubectl label nodes <OCP master node> node-role.kubernetes.io/compute=true"
- On each node, set vm.max_map_count to 1048575 (required if you want to install API Connect):
- Run "sudo sysctl -w vm.max_map_count=1048575"
- Run "echo "vm.max_map_count=1048575" | sudo tee -a /etc/sysctl.conf"
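The "tee -a" above appends unconditionally, so re-running it leaves duplicate lines in /etc/sysctl.conf. A small idempotent sketch; the file path is parameterised so it can be tried against a scratch file, and on a real node you would pass /etc/sysctl.conf and run it with sudo:

```shell
# Append vm.max_map_count=1048575 only if no such setting is present yet.
set_max_map_count() {
  conf="$1"
  if ! grep -q '^vm.max_map_count=' "$conf" 2>/dev/null; then
    echo "vm.max_map_count=1048575" >> "$conf"
  fi
}

demo=$(mktemp)
set_max_map_count "$demo"
set_max_map_count "$demo"   # second call is a no-op
cat "$demo"                 # the setting appears exactly once
```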
Install IBM Cloud Pak for Integration
Now we are ready to proceed with the installation of ICP4I. Refer to Installing IBM Cloud Pak for Integration on OCP 3.11 for details on each of the steps below.
- Log in to the master node
- Run "cd /var/app" and download the IBM Cloud Pak for Integration for OpenShift v2019.3.2 installer from Passport Advantage (PPA) into this directory.
- Run "tar xvf <archive_name>", e.g. tar xvf ibm-cloud-pak-for-integration-x86_64-2019.3.2-for-OpenShift.tar.gz. This will create a folder 'installer_files' and extract the artifacts into it.
- Run "oc get sc" and take note of the storage class name
- Take note of the subdomain: run "kubectl -n openshift-console get route console -o jsonpath='{.spec.host}' | cut -f 2- -d '.'" and copy the output.
- Run "oc get nodes" and take note of the names of the nodes
- Run "cd installer_files"
- Run "sudo cp /etc/origin/master/admin.kubeconfig cluster/kubeconfig"
- Run "cd cluster/images"
- Run "tar xf ibm-cloud-private-rhos-3.2.0.1907.tar.gz -O | sudo docker load". This will take approximately 30 minutes.
- Configure the config.yaml file under the installer_files/cluster folder, using the sample config.yaml file for reference. Make sure the node names here match the node names produced by the "oc get nodes" command.
- Make sure you are in the installer_files/cluster folder
- Run "sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z --security-opt label:disable ibmcom/icp-inception-amd64:3.2.0.1907-rhel-ee install-with-openshift". This will take about 2 hours to complete, so take a break!
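As a rough illustration of where the values collected earlier end up, a sketch of config.yaml follows. The authoritative key names are in the sample config.yaml shipped inside installer_files/cluster, so verify every key against it; all values below are placeholders:

```yaml
# Sketch only - check every key against the sample config.yaml in
# installer_files/cluster; placeholder values throughout.
cluster_nodes:
  master:
    - <node-name-from-oc-get-nodes>
  proxy:
    - <node-name-from-oc-get-nodes>
  management:
    - <node-name-from-oc-get-nodes>
storage_class: <name-from-oc-get-sc>
openshift:
  console:
    host: https://<master-hostname>:8443
  router:
    cluster_host: icp-console.<subdomain>
    proxy_host: icp-proxy.<subdomain>
default_admin_password: <choose-a-password>
```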
Verify the installation of IBM Cloud Pak for Integration
After the installation completes successfully, verify the installation.
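One simple verification is to run "kubectl -n kube-system get pods" and look for anything not in Running or Completed state. A small filter sketch; the pod listing is piped in as text so it can be tried offline, and the pod names below are made up:

```shell
# Print names of pods whose STATUS column (3rd field of
# "kubectl get pods --no-headers" output) is not Running/Completed.
unhealthy_pods() {
  awk '$3 != "Running" && $3 != "Completed" { print $1 }'
}

# Canned example input (hypothetical pod names):
printf '%s\n' \
  'icp-mongodb-0        1/1   Running            0   5m' \
  'helm-api-5d8f7c      0/1   CrashLoopBackOff   4   5m' | unhealthy_pods
# -> helm-api-5d8f7c
```

On the cluster you would use it as: kubectl -n kube-system get pods --no-headers | unhealthy_pods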
Uninstall IBM Cloud Pak for Integration
To uninstall IBM Cloud Pak for Integration, follow these steps:
- Log in to the master node
- Run "sudo docker run --privileged -ti --net=host -e LICENSE=accept -v $(pwd):/installer/cluster ibmcom/icp-inception-amd64:3.2.0.1907-rhel-ee uninstall-with-openshift"
- Restart Docker on each node: run "service docker restart"
- Remove the additional labels applied to the nodes:
kubectl label node <master-node> node-role.kubernetes.io/icp-master-
kubectl label node <proxy-node> node-role.kubernetes.io/icp-proxy-
kubectl label node <management-node> node-role.kubernetes.io/icp-management-
- Restart all nodes in the cluster
Destroy the environment - cluster as well as VMs
- Log in to the jump server and get inside the Docker container, as outlined in the "Deploy Red Hat OpenShift Container Platform" section of this recipe
- Run "make destroy". This will take approximately 1 hour.
Conclusion
In this recipe we covered:
- Provisioning infrastructure on IBM Cloud
- Installing an OpenShift cluster on the provisioned infrastructure
- Installing IBM Cloud Pak for Integration on the OpenShift cluster
- Uninstalling IBM Cloud Pak for Integration
- Destroying the OpenShift cluster and infrastructure
References
https://cloud.ibm.com/docs/terraform?topic=terraform-redhat
https://github.com/JyotiRani/Cloud-pak-for-Integration-examples
https://community.ibm.com/community/user/integration/viewdocument/installing-ibm-cloud-pak-for-integr