
Installing IBM Cloud Paks on IBM Cloud Satellite - AWS (Part 2)

By Pam Andrejko posted Tue September 07, 2021 01:34 PM

  


By: Sundari Voruganti and Pam Andrejko

After you configure a Satellite Location in IBM Cloud and deploy an OpenShift cluster, you are ready to configure storage. Part 2 of this blog series demonstrates how to install OpenShift Data Foundation (ODF) storage on the OpenShift cluster that you created in Part 1.

Configure storage

Note: These steps describe how to install ODF on OCP 4.7; the process is the same if you need to install OpenShift Container Storage (OCS) on OCP 4.6.

We will complete the following steps:

  1. Create two volumes for each AWS node and attach them to the node.
  2. Get the device details for your storage configuration and install ODF.
  3. Install the Cloud Pak.

Create two volumes for each node in AWS

ODF requires two volumes on each worker node.

Important: The volumes must be created in the same availability zone as the EC2 instance that you will attach them to. To find the zone, open each EC2 instance and note its availability zone, then create that instance's volumes in the same zone.

Figure 1. How to determine the availability zone of an EC2 instance.

Create two volumes per EC2 instance. To work with the Cloud Paks, we created one 100 GiB volume and one 500 GiB volume.
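If you prefer the CLI, the volumes can also be created with the AWS CLI. Here is a sketch for one worker node; the availability zone, volume type, and Name tags are example values to adjust for your own instances:

```shell
# Create the two volumes ODF needs for one worker node.
# Zone, sizes, and Name tags are examples -- use the zone of the target EC2 instance.
aws ec2 create-volume \
  --availability-zone us-east-2a \
  --size 100 \
  --volume-type gp2 \
  --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=odf-mon-worker1}]'

aws ec2 create-volume \
  --availability-zone us-east-2a \
  --size 500 \
  --volume-type gp2 \
  --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=odf-osd-worker1}]'
```

Each command prints the new volume ID, which you need for the attach step below.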

Note: This step is only required for EC2 instances that serve as worker nodes; it is not required for control plane nodes.

Figure 2. Create two storage volumes in AWS, 100 GiB and 500 GiB.


Figure 3. Two storage volumes, 100 GiB and 500 GiB.


After the two volumes are created, attach them to their corresponding EC2 instance:


Figure 4. Attach volume to EC2 instance.


Search for the EC2 instance that you want to attach the volume to. As you can see in the screenshot below, only the instances in the associated availability zone are displayed. Select your instance and click Attach.

Figure 5. Select which EC2 instance to attach the volume to.
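The attach step can likewise be scripted with the AWS CLI. A minimal sketch, with placeholder volume and instance IDs:

```shell
# Attach a volume to its worker node; the IDs below are placeholders.
# On Nitro-based instances the device shows up inside the OS as /dev/nvme*,
# regardless of the --device name requested here.
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf
```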

Get the device details for your storage configuration and install ODF

Now that the storage volumes are configured on the worker nodes on AWS, you can install ODF. The overall process consists of:

  • Creating a satellite storage configuration that contains the paths to the storage volumes on AWS.
  • Creating a satellite storage assignment which uses the storage configuration.

This process is described in the IBM Cloud Satellite documentation; you can also watch this video for a demonstration of the process:

A few tips on the documented process:

An example of the configuration command that we used is provided here:

ibmcloud sat storage config create --name ocs-template4 --location ps-aws-satellite-9 --template-name odf-local --template-version 4.7 -p "ocs-cluster-name=ocs-cluster" -p "osd-device-path=/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0a34caf044475b9fe,/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0652c0e01029bd0a1,/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol00a69cac4e27a106a" -p "mon-device-path=/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol00e6dca0aa2e39412,/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol02bd790b9ba37cba1,/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol07a93345c1d4549e7" -p "num-of-osd=1" -p "worker-nodes=ip-10-0-1-203.us-east-2.compute.internal,ip-10-0-2-19.us-east-2.compute.internal,ip-10-0-3-17.us-east-2.compute.internal"

We used the template name odf-local because our EC2 instances use local storage, and because our cluster was OCP 4.7, we used template version 4.7. For our three worker nodes, we plugged in the osd-device-path and mon-device-path values for each node. If you are using all of your worker nodes, the worker-nodes parameter is optional, but we include it here for clarity.
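To collect the osd-device-path and mon-device-path values, you can list the EBS device IDs from each worker node. One way, assuming you have oc access to the cluster, is oc debug (the node name below is one of our example workers; substitute your own):

```shell
# List the by-id device paths for the attached EBS volumes on one worker node.
oc debug node/ip-10-0-1-203.us-east-2.compute.internal -- \
  chroot /host ls -l /dev/disk/by-id/ | grep Amazon_Elastic_Block_Store
```

Each nvme-Amazon_Elastic_Block_Store_vol… entry maps to one of the volumes you created; note that the hyphen in the vol- ID is dropped in the device name.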

An example of the storage assignment command we used is:

ibmcloud sat storage assignment create --cluster c4ojhf4w0st5udsp5fbg --config ocs-config1 --name odf-storage-1

After you run this command, it will take a few minutes for the ODF cluster to be ready. Verify it was successful by running the command:

oc get csv -n openshift-storage


When the cluster is ready you should see something similar to:

NAME                 DISPLAY                       VERSION   REPLACES             PHASE
ocs-operator.v4.7.3  OpenShift Container Storage   4.7.3     ocs-operator.v4.7.2  Succeeded


Then the available storage classes can be viewed by running the following command:

oc get sc


NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false
localfile                     kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false
sat-ocs-cephfs-gold           openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true
sat-ocs-cephrbd-gold          openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true
sat-ocs-cephrgw-gold          openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false
sat-ocs-noobaa-gold           openshift-storage.noobaa.io/obc         Delete          Immediate              false

The OCS cluster is also visible in the OpenShift web console. From the openshift-storage project, navigate to Operators > Installed Operators > OpenShift Container Storage.


Figure 6. ODF Storage Cluster in OpenShift web console.

Install Cloud Paks

Now that we have configured storage on our cluster in our Satellite location, we can install a Cloud Pak. When installing a Cloud Pak on a Satellite cluster, you can use the same instructions that you would use if your OpenShift cluster were running as a managed service in IBM Cloud.

For example, for Cloud Pak for Integration, simply follow the Express Install instructions to:

  • Install the operator.
  • Create a secret with your entitlement key.
  • Deploy Platform Navigator using the storage class ocs-storagecluster-cephfs.

Then, using Platform Navigator, you can deploy Cloud Pak for Integration services such as MQ, API Connect, and App Connect Enterprise as usual. The storage classes that you created when you installed ODF can be used by these services.
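For example, the entitlement key secret from the Express Install steps can be created with oc. A sketch, assuming the Cloud Pak is installed into a namespace named cp4i (both the namespace and the secret name are example values):

```shell
# Create the IBM entitled registry pull secret.
# Replace <entitlement-key> with the key from your My IBM account;
# the cp4i namespace is an assumption for this example.
oc create secret docker-registry ibm-entitlement-key \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=<entitlement-key> \
  -n cp4i
```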

Adding additional clusters to your Satellite location

A single IBM Cloud Satellite location can support multiple OpenShift clusters. When you add a second cluster to your location, additional control plane nodes might be required, depending on the size of the existing control plane nodes and how many OpenShift clusters you plan to deploy. For more information, see Adding capacity to your Satellite location control plane.

Before you create another cluster, additional hosts must be available and "unassigned" in your Satellite location so that the new cluster can fully provision. Simply repeat the steps in this blog series to stand up a second OpenShift cluster. The following screenshot shows two multizone clusters in our single Satellite location.
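You can confirm that unassigned hosts are available before creating the second cluster. A sketch with the Satellite CLI, using our location name from earlier as an example:

```shell
# List the hosts in the location and check for entries that are
# still unassigned (not yet bound to a cluster or the control plane).
ibmcloud sat host ls --location ps-aws-satellite-9
```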


Figure 7. Satellite location with two OpenShift clusters

Summary

This blog series demonstrated how to install a Cloud Pak on a Satellite location using AWS infrastructure, walking through the steps to configure the storage needed to install Cloud Pak for Integration.

#Openshift #cloudpaks #ibm-cloud-satellite #AWS #featured-home-2
