Hybrid Cloud with IBM Z - Group home

How to manage persistent storage of multiple datacenters in IBM Cloud Infrastructure Center 1.1.3

  

Abstract

IBM® Cloud Infrastructure Center is an Infrastructure-as-a-Service (IaaS) offering for the IBM Z® and IBM LinuxONE platforms. This blog provides an example of managing Fibre Channel based persistent storage providers across multiple datacenters with IBM Cloud Infrastructure Center.
Written by Dong Yan Yang.

Objective

In a multiple-datacenter scenario, "availability zones" can be used in IBM Cloud Infrastructure Center to divide the hosts and storage providers into different zones. With that in place, and after “cross_az_attach” is disabled, the IBM Cloud Infrastructure Center UI automatically filters resources so that a virtual machine, its root volume, and its data volumes can only come from the same availability zone.

Cloud Infrastructure Center 1.1.3 supports IBM z/VM® and Red Hat® KVM as hypervisors; this blog uses the z/VM hypervisor as an example. The workflow is similar for the KVM hypervisor, except that the "Boot From Volume" function is not supported for KVM in Cloud Infrastructure Center 1.1.3.

For the storage-related concepts in Cloud Infrastructure Center, refer to "Storage concept and examples in IBM Cloud Infrastructure Center (z/VM related)".

Terminologies

Storage Provider

A storage system that provides persistent storage for virtual machines and that can be managed in Cloud Infrastructure Center as a storage provider.

For the supported storage systems refer to "Planning for persistent storage".

Agent Node of storage provider

The node on which the storage agent service of Cloud Infrastructure Center runs. It can be either the management node or a compute node.

Availability Zone (AZ)

A way to create logical groupings of hosts. For more information, see Availability Zones.

For more information about the usage of AZ, refer to "Planning for Availability Zone".

Environment

Figure 1 below illustrates the topology of the environment used in this blog:

  • Two datacenters: POK and MOP.
  • One LPAR in each datacenter to be managed by Cloud Infrastructure Center.
  • The Cloud Infrastructure Center management node is in POK.
  • One compute node is in each datacenter to manage the corresponding LPAR.
  • One IBM FlashSystem® solution in each site.
  • TCP/IP connections
    • between management node and the two compute nodes
    • between management node and the two storage providers
  •  Fibre Channel (FC) connections 
    • in each datacenter: between the LPAR and the corresponding storage provider
    • no cross-datacenter FC connection 
Figure 1. Environment topology

Steps

  1. The management node is installed in the POK datacenter with Cloud Infrastructure Center 1.1.3. 
    Refer to the IBM docs for the Cloud Infrastructure Center installation process.
  2. Add host (also known as "compute node")
With the topology shown in Figure 1, there is one compute node in each datacenter, and each needs to be added into Cloud Infrastructure Center. We add the host in the POK datacenter as "POK_host" and the host in the MOP datacenter as "MOP_host".
Figure 2 shows the panel "Add Host" and the required information, using the POK_host as an example. 
Note: If you want to attach a data volume to your virtual machine or use the “Boot From Volume” function to deploy a virtual machine with persistent storage as root volume, the “FCP List” is required when adding the host.

Figure 2. Add Host

  3. Create a host group for each datacenter
Usually there is no FC connection across datacenters, so we need an isolation mechanism in Cloud Infrastructure Center to prevent a volume from one datacenter from being attached to a virtual machine in the other datacenter. The isolation mechanisms we use are the "availability zone" and the "host group": we put the compute node and the storage provider of the same datacenter into the same availability zone, and create a different availability zone for the other datacenter. Thus, when attaching a data volume to a virtual machine or deploying a virtual machine with a volume as its root disk, the availability zone prevents a cross-site volume attachment.
For more details on the concepts of "availability zone" and "host group" refer to "Sample config for availability zone and host group".
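The isolation rule described above can be sketched in a few lines of Python. This is an illustration of the grouping logic only, not Cloud Infrastructure Center code; only the names (POK_host, MOP_host, group_POK, group_MOP, AZ_POK, AZ_MOP, POKSTOR, MOPSTOR) come from this blog's example.

```python
# Hypothetical model of the host group / availability zone layout in this blog.
HOST_GROUPS = {
    "group_POK": {"availability_zone": "AZ_POK", "hosts": ["POK_host"]},
    "group_MOP": {"availability_zone": "AZ_MOP", "hosts": ["MOP_host"]},
}

# Each storage provider belongs to the AZ of its datacenter.
STORAGE_PROVIDERS = {
    "POKSTOR": "AZ_POK",
    "MOPSTOR": "AZ_MOP",
}

def az_of_host(host):
    """Return the availability zone of a host via its host group."""
    for group in HOST_GROUPS.values():
        if host in group["hosts"]:
            return group["availability_zone"]
    return None

def same_site(host, provider):
    """A volume may be attached only when host and provider share an AZ."""
    return az_of_host(host) == STORAGE_PROVIDERS.get(provider)

print(same_site("POK_host", "POKSTOR"))  # True:  same datacenter
print(same_site("POK_host", "MOPSTOR"))  # False: cross-site, no FC path
```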
Next, we need to create the host groups and divide the hosts into different availability zones.

1) After the two hosts have been added, they are put into the host group named "Default Group", as shown in Figure 3.
Figure 3. Default host group after host has been added

2) Create one host group for each datacenter with a different "availability zone" value
  • Use the "Create" button in the "Hosts" -> "Host Groups" panel to create a host group for the POK datacenter, using "group_POK" as the host group name and "AZ_POK" as the AZ name.
Figure 4. Create host group and add member for the POK datacenter

  • Use the "Create" button in the "Hosts" -> "Host Groups" panel to create a host group for the MOP datacenter, using "group_MOP" as the host group name and "AZ_MOP" as the AZ name.
Figure 5. Create host group and add member for the MOP datacenter

  • After the two host groups are created, you can see in Figure 6 that there is one host in each of the newly created host groups.
Figure 6. Host groups and host relationship

  4. Add the storage provider of each site into the management of IBM Cloud Infrastructure Center
In the previous step, one host group was created for each datacenter, each with a new availability zone. To use the storage provider support, we need to add the storage provider of each site into the corresponding availability zone.
As shown in Figure 7, when adding the storage provider of the POK datacenter, select POK_host as the Agent Node; there is then only one availability zone to choose, the one of POK_host (AZ_POK).
The storage provider of the MOP datacenter is added in a similar way, with MOP_host as the Agent Node and AZ_MOP as the availability zone.
Figure 7. Add Storage Provider

After the storage providers have been added, they are listed with their different agent nodes and availability zones, as shown in Figure 8:
Figure 8. Storage Provider list

  5. Disable “cross_az_attach”
By default, the “cross_az_attach” setting is enabled in Cloud Infrastructure Center. We need to disable it, so that volume attachment uses the AZ attribute to filter and isolate volumes. For more details on this setting, refer to the IBM documentation on configuring the cross-az-attach setting.
To disable it, log in to the management node and issue the following command:
[root@cic113m ~]#  icic-config storage cross-az-attach --disable
Successfully configure the cross_az_attach.
To make the changes effective, please perform below steps on management node:
    1. run 'icic-services nova restart'
    2. run 'icic-services cinder restart'
    3. run 'icic-services remote restart --node all'
[root@cic113m ~]#
We need to restart the related services as instructed in the command output, and then log out and log back in to the Cloud Infrastructure Center GUI.
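Conceptually, disabling cross_az_attach tightens the volume-attach check as in the following simplified sketch. This is an illustration of the setting's effect, not the actual OpenStack or Cloud Infrastructure Center code.

```python
# Simplified model of the cross_az_attach setting's effect on attachment.
def can_attach(vm_az, volume_az, cross_az_attach):
    """With cross_az_attach enabled, any volume can be attached.
    With it disabled, the VM and the volume must share an AZ."""
    if cross_az_attach:
        return True
    return vm_az == volume_az

# Default behavior: cross-AZ attachment is allowed.
print(can_attach("AZ_POK", "AZ_MOP", cross_az_attach=True))   # True
# After 'icic-config storage cross-az-attach --disable':
print(can_attach("AZ_POK", "AZ_MOP", cross_az_attach=False))  # False
print(can_attach("AZ_POK", "AZ_POK", cross_az_attach=False))  # True
```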

  6. Preparation steps for deploying a virtual machine
Before you can deploy a virtual machine in Cloud Infrastructure Center, you also need to upload images and create networks and compute templates (also known as "flavors"). We will skip the details of these steps in this blog and focus on the storage-related parts.
Use case 1: Deploy a virtual machine with SCSI volume as root disk (also known as “Boot From Volume”) 
In this use case, we will deploy a virtual machine with a volume as the root disk in the MOP datacenter. This is one use case for persistent storage in Cloud Infrastructure Center, also known as "Boot From Volume".
Cloud Infrastructure Center filters the available storage templates with the availability zone, so that only the storage providers in the same availability zone are listed.
Steps:
Select the SCSI image uploaded in the "Images" panel and click the "Deploy" button. A deploy page pops up, requesting the name, instance count, description, deploy target, and so on, as shown in Figure 9.
In this deploy page, after you select a "Deploy target", the storage template from the same availability zone is automatically selected and the other storage templates are hidden. This ensures that the virtual machine uses a storage provider in the same datacenter as the selected deploy target host.
Figure 9. Deploy a virtual machine with Boot From Volume function
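The deploy-page filtering just described can be sketched as below. The template names are invented for illustration; only the AZ names come from this blog's setup, and the function is a simplified model, not the actual UI code.

```python
# Hypothetical storage templates, one per storage provider/AZ.
TEMPLATES = [
    {"name": "POKSTOR base template", "availability_zone": "AZ_POK"},
    {"name": "MOPSTOR base template", "availability_zone": "AZ_MOP"},
]

def selectable_templates(deploy_target_az):
    """Hide storage templates whose AZ differs from the deploy target's AZ."""
    return [t["name"] for t in TEMPLATES
            if t["availability_zone"] == deploy_target_az]

# Deploying to the MOP datacenter leaves only the MOP-side template.
print(selectable_templates("AZ_MOP"))  # ['MOPSTOR base template']
```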

Use case 2: Deploy a virtual machine with local DASD disk and attach a volume to this virtual machine
In this use case, we will deploy a virtual machine in the POK datacenter and then attach a data volume to it. Cloud Infrastructure Center filters the available volume list with the availability zone, so that only the volumes in the same datacenter are listed.
  1. Select an uploaded DASD image to deploy a virtual machine in the POK datacenter
In the deploy page, all availability zones and standalone hosts are listed in the "Deploy target" list. To deploy a virtual machine in the POK datacenter, we can select either AZ_POK or POK_host as the target.
Figure 10. Deploy a virtual machine from DASD image in the POK datacenter

  2. Create data volumes on the storage providers
To show the volume list filter, we create data volumes on the two storage providers from the two datacenters.
In the "Storage" -> "Data Volumes" panel, click the "Create" button to create a volume:
Figure 11. Create data volumes on the POKSTOR

Similarly, create three data volumes from the MOPSTOR; the resulting volume list is shown in Figure 12.
Figure 12. Data volume list

  3. Attach the data volume to the virtual machine
On the "Virtual Machines" page, select the virtual machine we just deployed on POK_host, which is in the POK datacenter, and click "Attach Volume". A volume list pops up to select from.
As shown in Figure 13, Cloud Infrastructure Center filters the volume list so that only the volumes in the same availability zone are listed. In this example, only the three volumes from POKSTOR are available to be attached to this virtual machine; a cross-availability-zone attach is not allowed.
Figure 13. Attach data volume to virtual machine
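The attach-time filter from Figure 13 can be modeled the same way. The volume names below are invented for illustration; the provider and AZ names come from this blog's setup, and the function is a simplified sketch of the behavior with cross_az_attach disabled, not product code.

```python
# Hypothetical data volumes: three on POKSTOR and (for brevity) one on MOPSTOR.
VOLUMES = [
    {"name": "pok_vol1", "provider": "POKSTOR", "availability_zone": "AZ_POK"},
    {"name": "pok_vol2", "provider": "POKSTOR", "availability_zone": "AZ_POK"},
    {"name": "pok_vol3", "provider": "POKSTOR", "availability_zone": "AZ_POK"},
    {"name": "mop_vol1", "provider": "MOPSTOR", "availability_zone": "AZ_MOP"},
]

def attachable_volumes(vm_az):
    """List the volumes eligible for attachment to a VM in the given AZ."""
    return [v["name"] for v in VOLUMES if v["availability_zone"] == vm_az]

# A VM deployed on POK_host (AZ_POK) is offered only the POKSTOR volumes.
print(attachable_volumes("AZ_POK"))  # ['pok_vol1', 'pok_vol2', 'pok_vol3']
```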