Installing IBM Cloud Paks on IBM Cloud Satellite - Azure

By Pam Andrejko posted Fri October 01, 2021 12:39 PM

  


By: Pam Andrejko  and Sundari Voruganti

IBM Cloud Satellite enables businesses to run OpenShift as a managed service on their own infrastructure, in other cloud providers such as AWS or Azure, or in edge environments. Adding Cloud Paks to a Satellite location lets businesses bring a secured, unifying layer of services, such as MQ queue managers and App Connect integration servers, close to where the business applications reside and the data is stored, ensuring the same security and controls are used no matter where data is collected, processed, or shared.

This blog walks you through the steps to install a Cloud Pak on a Satellite location that uses Azure infrastructure. As described above, customers can then use Cloud Paks to build models where their data resides without privacy concerns.

If you want to see how the process works on AWS, see Installing IBM Cloud Paks on IBM Cloud Satellite - AWS.

At a high level, installing a Cloud Pak on a Satellite location involves four steps, each covered in its own section below: create a Satellite location, create an OpenShift cluster at the location, configure storage, and install the Cloud Pak.

Create a Satellite location

Before you begin this process, you will need the following information:

  • Azure client ID, tenant ID, and secret key
  • Size of nodes to provision (CPU and RAM)
  • Number of nodes required

Azure credentials

We begin by creating a new Satellite Location in IBM Cloud using Azure infrastructure. In the left navigation menu, select Satellite > Locations > Create location. When you select the Azure tile you will need to provide your Azure client ID, tenant ID, and secret key and then click Fetch options from Azure.


Size of nodes to provision (CPU and RAM)

Next you need to specify the number and size of the Azure VMs for your Satellite Location.

Plan for a minimum of three control plane nodes plus the number of worker nodes required by the Cloud Pak. Azure automatically distributes the nodes across three zones by default. If you provision six nodes across three zones, one node per zone is automatically assigned to the control plane. This pattern ensures high availability and reliability for Cloud Paks that take advantage of this setup (for example, Cloud Pak for Integration).

Tip: A single IBM Cloud Satellite location can support multiple OpenShift clusters. See the topic Adding capacity to your Satellite location control plane  to determine if additional control plane nodes are required when you add more clusters to your location. For planning purposes, adding the extra nodes when you create the satellite location makes it easier later when you want to add another cluster.

For the size of the nodes, we chose 16 CPU x 64 GiB nodes for our location. (The screenshot shows how you can add 6 worker nodes for a total of 9 nodes, but for this tutorial we actually added only 3 worker nodes.)

After you click Create, the virtual machines are provisioned on Azure. This can take a while; you'll know the hosts are ready when the Satellite location status is Normal:

The status of the associated control plane Hosts is also Normal or Ready.
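If you prefer the CLI, the same status checks are available through the IBM Cloud Satellite plug-in. A minimal sketch, assuming a hypothetical location named my-azure-location:

```shell
# List Satellite locations and confirm the State column shows "normal"
ibmcloud sat location ls

# Show details for a specific location (replace the name with yours)
ibmcloud sat location get --location my-azure-location

# List the hosts in the location; control plane hosts should be
# assigned and healthy before you create a cluster
ibmcloud sat host ls --location my-azure-location
```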

For more details on this overall process see Getting started with IBM Cloud Satellite.

Create an OpenShift cluster at the location

Deploy the cluster

If you have created an OpenShift cluster on IBM Cloud before, the process is largely the same. The major difference is you need to select Satellite as your infrastructure and then select the Satellite Location you just created.

In the Satellite > Locations page for your location, click Getting started > Create cluster at the bottom of the page. Select Satellite as the infrastructure and then select your Satellite location.


Under Worker Pools, select the size of the nodes for your cluster and which zone they reside in. Some Cloud Paks (like Cloud Pak for Integration) support multizone clusters.


When the cluster is provisioned, it will use the number of nodes you specified from each selected zone as worker nodes. As the note suggests, ensure you have adequate compute available before creating the cluster.

Finally, click Enable cluster admin access for Satellite Config, which ensures that all of the Satellite Config components will work, and then give your cluster a name.
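These console steps can also be scripted with the IBM Cloud CLI. A rough sketch, with placeholder cluster and location names (flag availability may vary by plug-in version):

```shell
# Create a Satellite OpenShift cluster in the location; the
# --enable-config-admin flag corresponds to "Enable cluster admin
# access for Satellite Config" in the console
ibmcloud oc cluster create satellite \
  --name my-satellite-cluster \
  --location my-azure-location \
  --version 4.7_openshift \
  --enable-config-admin
```

Worker pool sizing and zone selection can still be adjusted afterwards from the console or CLI.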

The cluster is ready when it shows as Normal in the Kubernetes Clusters page.

Configure the cluster for public login

When IBM Cloud Satellite provisions nodes on Azure, they have only private IP addresses. To configure storage on our OpenShift cluster we need to log in to the web console, which requires either VPN access or public IP addresses on the nodes. To add a public IP address to an Azure node, follow the instructions in the Azure documentation or watch this video:

Then in order to log in to the OpenShift web console, you need to update the IP addresses in the DNS and cluster subdomain. From the Azure portal Home, open each VM and record the public and private IP addresses of your control plane and worker nodes:

Once you have the list of IP addresses, you can use the IBM Cloud CLI to run a series of commands to:

  1. Register the public IP addresses of the control plane nodes with your Satellite location DNS.
  2. Add the public IP addresses of worker nodes to your cluster's subdomain.
  3. Remove the private IP addresses of worker nodes from your cluster's subdomain.
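The steps above can be sketched with the IBM Cloud CLI as follows, using hypothetical IP addresses and names (repeat each command for every node, and substitute your cluster's subdomain where indicated):

```shell
LOCATION=my-azure-location
CLUSTER=my-satellite-cluster

# 1. Register each control plane node's public IP with the location DNS
ibmcloud sat location dns register --location "$LOCATION" --ip 20.81.0.10

# Look up your cluster's subdomain (the "nlb-host" used below)
ibmcloud oc nlb-dns ls --cluster "$CLUSTER"

# 2. Add each worker node's public IP to the cluster subdomain
ibmcloud oc nlb-dns add --cluster "$CLUSTER" --nlb-host <subdomain> --ip 20.81.0.20

# 3. Remove each worker node's private IP from the subdomain
ibmcloud oc nlb-dns rm classic --cluster "$CLUSTER" --nlb-host <subdomain> --ip 10.0.0.5
```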

Refer to the IBM Cloud Satellite documentation for the detailed commands, or watch this video (the steps are identical for AWS and Azure nodes):



After completing these steps, browse to your cluster in IBM Cloud, click on it, and then click  OpenShift web console.

You’ve now successfully configured a Satellite Location in IBM Cloud using Azure  infrastructure and deployed an OpenShift cluster. Before you can install a Cloud Pak, you need to configure storage for your cluster.



Configure storage



Tip:
These steps describe how to install OpenShift Data Foundation (ODF) on OCP 4.7; the process is identical if you need to install OpenShift Container Storage (OCS) on OCP 4.6.

We will complete the following steps:

  1. Create and attach two disk volumes for each Azure worker node.
  2. Get the device details for your storage configuration, install ODF, and create a storage cluster.
  3. Install the Cloud Pak.

Create two volumes for each worker node in Azure

ODF requires two volumes on each worker node. From the Azure portal home, click Storage accounts to create an account in the same region where the VMs reside. 

Now that you have a storage account, you can create and attach disks to your nodes. Note that this process only needs to be performed for the worker nodes on the cluster, not the control plane nodes.

  1. From Azure portal Home, click on Virtual machines.
  2. Search for the VM that you want to add storage to and open it.
  3. Click Disks in the left navigation and then click Create and attach a new disk.
  4. Select a new LUN id for the disk and give the disk a name.
  5. Specify the size as 100 GiB, and for Host caching specify Read/write.
  6. Now repeat these steps to add a second 500 GiB disk to the node.
  7. Repeat these steps for each worker node in your OpenShift cluster.
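If you'd rather script the disk creation, the Azure CLI offers an equivalent. A sketch, where the resource group and VM names are placeholders:

```shell
# Create and attach a new 100 GiB data disk with read/write host caching
az vm disk attach --resource-group my-rg --vm-name my-worker-1 \
  --name my-worker-1-disk1 --new --size-gb 100 --caching ReadWrite

# Create and attach the second, larger disk on the same node
az vm disk attach --resource-group my-rg --vm-name my-worker-1 \
  --name my-worker-1-disk2 --new --size-gb 500 --caching ReadWrite
```

Run the pair of commands once per worker node.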

Get the device details for your storage configuration and install ODF

Now that the storage volumes are configured on the worker nodes on Azure, you can install ODF. The overall process consists of:

  • Creating a satellite storage configuration that contains the paths to the storage volumes on Azure.
  • Creating a satellite storage assignment which uses the storage configuration.

This process is described in the IBM Cloud Satellite documentation, or you can watch this video for a demonstration. (The video refers to AWS infrastructure, but the steps are identical for Azure.)

Update:
Since this blog was written, an enhancement was made to the ODF storage with the addition of a new parameter, auto-discover-devices=true. When this parameter is specified, you no longer need to provide the mon-device-path or osd-device-path parameters when you create the satellite storage configuration. The following steps still work, but tip 1 below is no longer required.

Tips:
A few tips on the documented process:

1. When you run the following command on the Azure node to get the storage path:

ls -l /dev/disk/by-id/

you may see multiple paths for the same device. Use the scsi paths when you create the storage configuration; for example, yours may resemble the following path:

/dev/disk/by-id/scsi-360022480a8f7082488910c913af65180
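Rather than logging in to each node, you can collect the scsi paths for every worker from your workstation with oc debug. A sketch, assuming the workers carry the standard worker role label:

```shell
# For each worker node, chroot into the host filesystem and list the
# stable by-id disk paths, keeping only the scsi entries
for node in $(oc get nodes -l node-role.kubernetes.io/worker -o name); do
  echo "== $node =="
  oc debug "$node" -- chroot /host ls -l /dev/disk/by-id/ | grep scsi-
done
```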

2. An example of the configuration command that we used is provided here:

ibmcloud sat storage config create --name odf-config-azure \
  --location ps-azure-centralus \
  --template-name odf-local --template-version 4.7 \
  -p "ocs-cluster-name=ps-azure-satellite" \
  -p "osd-device-path=/dev/disk/by-id/scsi-36002248072f0f3b4480d09495538391f,/dev/disk/by-id/scsi-360022480121ab2aa67d4cd8db51ac701,/dev/disk/by-id/scsi-36002248066a64306648e616c14395c2f" \
  -p "mon-device-path=/dev/disk/by-id/scsi-360022480a8f7082488910c913af65180,/dev/disk/by-id/scsi-3600224807ed5fc290c5adb9765e702d0,/dev/disk/by-id/scsi-36002248051a803f21b90b7357512e5c8" \
  -p "num-of-osd=1"

We used the template-name odf-local because our Azure nodes use local storage. And since our cluster was OCP 4.7, we used template-version 4.7. For our three worker nodes, we plugged in the osd-device-path and mon-device-path for each node.

Updated command using the ODF 4.8 template:

ibmcloud sat storage config create --name odf-config-azure \
  --location ps-azure-central-us \
  --template-name odf-local --template-version 4.8 \
  -p "ocs-cluster-name=ps-azure-satellite" \
  -p "auto-discover-devices=true" \
  -p "iam-api-key=<my_key>"


In this updated version of the command, you also need to provide your IBM Cloud API key via the iam-api-key parameter.
3. An example of the storage assignment command we used is:

ibmcloud sat storage assignment create --cluster c50d6oad0v682ff3llug --config odf-config-azure --name odf-azure-assignment

 

After you run this command, it will take a few minutes for the ODF cluster to be ready. Verify it was successful by running the command:

oc get csv -n openshift-storage


4. When the cluster is ready you should see something similar to:

NAME                 DISPLAY                       VERSION   REPLACES              PHASE
ocs-operator.v4.7.3  OpenShift Container Storage   4.7.3     ocs-operator.v4.7.2   Succeeded

 

Then the available storage classes can be viewed by running the following command:

oc get sc

NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false
localfile                     kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false
sat-ocs-cephfs-gold           openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true
sat-ocs-cephrbd-gold          openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true
sat-ocs-cephrgw-gold          openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false
sat-ocs-noobaa-gold           openshift-storage.noobaa.io/obc         Delete          Immediate              false


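To confirm that dynamic provisioning works end to end, you can create a small test PVC against one of the new storage classes and check that it binds. The claim name and namespace here are arbitrary:

```shell
# Create a 1 GiB claim against the CephFS storage class
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: odf-smoke-test
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: sat-ocs-cephfs-gold
EOF

# The claim should reach status Bound within a minute or so
oc get pvc odf-smoke-test -n default

# Clean up afterwards
oc delete pvc odf-smoke-test -n default
```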
The OCS cluster is also visible in the OpenShift web console. From the openshift-storage project, navigate to  Operators > Installed Operators > OpenShift Container Storage.

Install Cloud Paks
Now that we have configured storage on our cluster in our Satellite location, we can install a Cloud Pak. When installing Cloud Paks on a Satellite cluster, you can use the same instructions that you would use if your OpenShift cluster were running as a managed service in IBM Cloud.

For example, for Cloud Pak for Integration, simply follow the Express Install instructions to:

  • Install the Operator.
  • Create a secret with your entitlement key.
  • Deploy Platform Navigator using the storage class ocs-storagecluster-cephfs.
  • Then, using Platform Navigator, you can deploy the Cloud Pak for Integration services such as MQ, API Connect, and App Connect Enterprise, as usual. The storage classes that you created when you installed ODF can be used by the Cloud Pak for Integration services.
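For example, the entitlement-key secret can be created like this (the namespace is a placeholder for wherever you install the Cloud Pak, and the key comes from the IBM Container software library):

```shell
# cp.icr.io is the IBM Entitled Registry; the username is literally "cp"
oc create secret docker-registry ibm-entitlement-key \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=<your-entitlement-key> \
  --namespace=<cloud-pak-namespace>
```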

Adding additional clusters to your Satellite location

A single IBM Cloud Satellite location can support multiple OpenShift clusters. When you add a second cluster to your location, additional control plane nodes may be required depending on the size of the existing control plane nodes and how many OpenShift clusters you plan to deploy. For more information see Adding capacity to your Satellite location control plane.

Before you create another cluster, additional hosts must be available and "unassigned" in your Satellite location so that the cluster can fully provision. Simply repeat the steps in this blog to stand up a second OpenShift cluster. The following screenshot shows two multizone clusters in our single Satellite location.
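You can verify that unassigned hosts are available before creating the second cluster. A sketch, with a placeholder location name:

```shell
# Hosts whose State is "unassigned" are free to be claimed by a new cluster
ibmcloud sat host ls --location my-azure-location
```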


Summary

This blog series demonstrated how to install a Cloud Pak on a Satellite location using Azure infrastructure. It then walked you through the steps to configure storage that was used to install Cloud Pak for Integration.

#Openshift
#cloudpaks
#ibm-cloud-satellite
#azure