
Adding persistent local storage to zCX for OpenShift Cluster using Local Storage Operator (LSO)

  

Introduction:


A zCX for OpenShift cluster provides the capability to add linear VSAM data sets as local disks, either by using the ocp_provision.xml zCX z/OSMF workflow or by using the ocp_add_local_storage_disks.xml zCX z/OSMF workflow after the cluster is up and running. These local storage disks can then be used to create local persistent volumes (PVs) for application workloads that need persistent storage. Remember that application workloads using local storage have node affinity, since local storage disks are not shared between zCX OpenShift cluster nodes.

Step-by-step guide:

The linear VSAM data sets can be added to the cluster nodes after the cluster is brought up. Each disk backs exactly one PV, so size the disks to match your workloads: if you plan to run three applications on a node that require 10 Gi, 10 Gi, and 40 Gi of disk space, add two 10 Gi disks and one 40 Gi disk. You cannot add a single 60 Gi disk and use it to create multiple PVs.

The disks added to your zCX for OCP appliances can be verified by looking at your start.json file. In the following example, seven disks show up with the purpose 'data'; these were added using the ocp_provision.xml workflow and the ocp_add_local_storage_disks.xml workflow respectively.

[Figure: start.json excerpt showing the seven disks with purpose 'data']

To view the added storage from inside the cluster node:

  1. Log in to the node terminal of the zCX for OCP appliance to which you added the local disks.
  2. Issue command: chroot /host
  3. Run command: lsblk -l

Figure 1. Output of lsblk -l
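If the figure is not visible, the lsblk -l output has this general shape. The 2 GB size of vdb matches the by-path discussion below; the other sizes, mount points, and device numbers are illustrative.

    NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    vda  252:0    0   60G  0 disk
    vda3 252:3    0  384M  0 part /boot
    vda4 252:4    0 59.6G  0 part /sysroot
    vdb  252:16   0    2G  0 disk
    vdc  252:32   0    2G  0 disk
    ...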

This command shows the root disk (/dev/vda, with its partitions /dev/vda3 and /dev/vda4); the remaining disks are data disks. The command also shows each disk's size. The config and logs disks seen in start.json do not appear here.

Note that the device names /dev/vdb, /dev/vdc, and so on can change when the nodes or the cluster restart. For this reason, these paths cannot be used when setting up local PVs.

The disks can instead be viewed using their by-path IDs, which are static across reboots (for example, by running ls -l /dev/disk/by-path inside the node).

Figure 2. Data disks listed by by-path ID
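If the figure is not visible, such a by-path listing has this general shape. The mapping of ccw-0.0.0003 to /dev/vdb matches the text below; the other entries are illustrative.

    ccw-0.0.0000 -> ../../vda
    ccw-0.0.0003 -> ../../vdb
    ccw-0.0.0004 -> ../../vdc
    ...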

Using Figure 1 and Figure 2, you can determine the disk size for each by-path ID; for example, ccw-0.0.0003, which currently maps to /dev/vdb, is 2 GB in size.

Note that we will use the above by-path IDs for the data disks, ccw-0.0.0003 and onward, with the Local Storage Operator.

Example of setting up a PV using local storage (note that the node affinity needs to be specified):

pv.yaml
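In case the attached pv.yaml is not accessible, the following is a minimal sketch of such a PV. The PV name, capacity, storage class name, and hostname are placeholders to adapt to your cluster; the path uses the by-path ID identified earlier.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: local-pv-example            # placeholder name
    spec:
      capacity:
        storage: 2Gi                    # size of the disk behind ccw-0.0.0003
      volumeMode: Block                 # raw block device; use Filesystem for a formatted, mounted disk
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage   # placeholder storage class
      local:
        path: /dev/disk/by-path/ccw-0.0.0003
      nodeAffinity:                     # required for local PVs; pin to the node that owns the disk
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-0.example.com   # placeholder hostname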

Configuring the Local Storage Operator:

 

Follow the instructions on this page to install the Local Storage Operator: https://docs.openshift.com/container-platform/4.11/storage/persistent_storage/persistent-storage-local.html

Note that while setting up the LocalVolume resource, use the by-path IDs for 'devicePaths', as shown in the example below.

local-volume.yaml
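In case the attached local-volume.yaml is not accessible, the following is a sketch of such a LocalVolume resource. The resource name, hostname, storage class name, and filesystem type are placeholders; the devicePaths use the by-path IDs of the data disks.

    apiVersion: local.storage.openshift.io/v1
    kind: LocalVolume
    metadata:
      name: local-disks                 # placeholder name
      namespace: openshift-local-storage
    spec:
      nodeSelector:                     # limit discovery to the nodes that own the disks
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - worker-0.example.com   # placeholder hostname
      storageClassDevices:
        - storageClassName: local-sc    # placeholder storage class
          volumeMode: Filesystem
          fsType: ext4
          devicePaths:
            - /dev/disk/by-path/ccw-0.0.0003
            - /dev/disk/by-path/ccw-0.0.0004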

Creating the LocalVolume resource creates PVs if the disks referenced by the by-path IDs are found on the specified nodes. Note that PVs created using this method show an incorrect annotation; for more information, refer to this Bugzilla defect: https://bugzilla.redhat.com/show_bug.cgi?id=2115728
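You can verify the result with oc get pv; the operator typically names the generated PVs local-pv-<hash>. The output below is illustrative.

    oc get pv
    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      STORAGECLASS
    local-pv-1cec77cf   2Gi        RWO            Delete           Available   local-sc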

Summary:

Workloads requiring local storage can be run on OpenShift Container Platform 4.11 either by creating PVs manually or by using the Local Storage Operator's LocalVolume resource, once the linear VSAM data sets are added to the nodes.

References:
  1. IBM zCX Foundation for Red Hat OpenShift. https://www.ibm.com/products/zcx-openshift
  2. Red Hat OpenShift. https://www.redhat.com/en/technologies/cloud-computing/openshift
  3. zCX Foundation for Red Hat OpenShift content solution. https://www.ibm.com/support/z-content-solutions/zcx-openshift