In this blog, I would like to share an approach I tried for deploying Cloud Pak for Business Automation (CP4BA) on IBM Cloud VPC infrastructure by leveraging the existing block storage of an IBM Cloud VPC cluster as a dynamic storage class. Please note that this approach is not officially supported; it is intended only for PoC / demo purposes and was tried with the CP4BA starter deployment.
Deploying CP4BA by leveraging Block storage as Dynamic Storage on IBM Cloud VPC
When deploying Cloud Pak for Business Automation (CP4BA), it is important to meet the storage considerations, just as with other Cloud Paks. For example, there are Business Automation capabilities that require storage classes with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access modes.
According to the CP4BA documentation, it needs both dynamic storage and block storage.
- Dynamic storage must be supported on the cluster.
- Network File System (NFS) storage or some other shareable storage such as Gluster File System (GlusterFS) is needed if you choose not to use the native OCP storage provisioner. You can use the NFS-Client Provisioner along with an NFS share to meet the dynamic storage requirements.
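Before starting, you can check which storage classes already exist on the cluster. On an IBM Cloud VPC cluster you would typically see only block (RWO) storage classes at this point:
oc get storageclass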
Let's say we are planning to deploy CP4BA on a Red Hat OpenShift cluster that runs on IBM Cloud VPC infrastructure, where File Storage is not supported / not GA'ed yet (as of the date this blog was published). The other option is leveraging OpenShift Container Storage (OCS). However, OCS has its own system requirements; for example, it needs at least 3 nodes of size 16x64 (16 vCPU x 64 GB memory) each. For a simple demo environment, this alone becomes a huge requirement to handle.
While exploring the options, I came across a pattern used with Cloud Pak for Integration (CP4I) that leverages the Rook NFS server.
Since the storage requirements are similar, I followed the same steps with CP4BA and deployed it successfully.
The idea, as described in the CP4I document, is to create a custom storage class in the cluster that supports the RWX access mode and is backed by an RWO storage class. On IBM Cloud, a VPC cluster comes with default block storage (RWO) attached, which can be leveraged with the help of the Rook NFS server to create RWX storage classes by following the steps in the CP4I document.
Here are the steps, which can be followed from that document:
- Deploying Rook NFS Operator
- Deploying the Rook NFS server
- Creating the storage class
- Deploying CP4BA and capabilities
1. Deploying Rook NFS Operator
i. Clone the NFS Git repository at version 1.7.3:
git clone --single-branch --branch v1.7.3 https://github.com/rook/nfs.git
ii. Navigate to this directory:
cd nfs/cluster/examples/kubernetes/nfs
iii. Open the operator.yaml file and change the Deployment image field from rook/nfs:v1.7.3 to icr.io/cpopen/cpd/rook-nfs:kz-220512 (a scripted alternative is shown after step iv).
iv. Log in to the OCP cluster using the oc login command with your user credentials.
For example:
oc login <openshift_url> -u <username> -p <password> -n <namespace>
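As an alternative to editing operator.yaml by hand in step iii, the image change can be scripted. This is just a sketch assuming GNU sed; the image names are the ones from step iii:
sed -i 's|rook/nfs:v1.7.3|icr.io/cpopen/cpd/rook-nfs:kz-220512|' operator.yaml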
v. Apply the CustomResourceDefinitions of the NFS Server to the cluster:
oc apply -f crds.yaml
vi. Create the operator deployment:
oc apply -f operator.yaml
This creates the following:
- Namespace: rook-nfs-system
- ServiceAccount: rook-nfs-operator
- ClusterRole and ClusterRoleBinding for the operator rook-nfs-operator
- The operator Deployment rook-nfs-operator in the namespace rook-nfs-system
vii. Verify that the operator is running:
oc get pod -n rook-nfs-system
NAME READY STATUS RESTARTS AGE
rook-nfs-operator-84fff9f699-tv25t 1/1 Running 0 10s
viii. Grant the Rook NFS service account access to the privileged SecurityContextConstraints (SCC) resources:
oc adm policy add-scc-to-user privileged system:serviceaccount:rook-nfs:rook-nfs-server
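Optionally, confirm that the grant took effect; the users field of the privileged SCC should now include the rook-nfs-server service account:
oc get scc privileged -o jsonpath='{.users}'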
2. Deploying the Rook NFS server
Once the operator is deployed, deploy the Rook NFS server. Follow the steps below; no modifications are required here.
i. Create the RBAC objects for the NFS server. To do this, create a file server.yaml, copy the contents below into it, and apply it.
---
apiVersion: v1
kind: Namespace
metadata:
  name: rook-nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-nfs-server
  namespace: rook-nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rook-nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["rook-nfs-policy"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups:
      - nfs.rook.io
    resources:
      - "*"
    verbs:
      - "*"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rook-nfs-provisioner-runner
subjects:
  - kind: ServiceAccount
    name: rook-nfs-server
    namespace: rook-nfs
roleRef:
  kind: ClusterRole
  name: rook-nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
Apply the file:
oc apply -f server.yaml
This creates the following:
- Namespace: rook-nfs
- ServiceAccount: rook-nfs-server
- ClusterRole and ClusterRoleBinding for the rook-nfs-server
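Optionally, verify these objects before moving on:
oc get sa rook-nfs-server -n rook-nfs
oc get clusterrole,clusterrolebinding rook-nfs-provisioner-runner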
ii. Now, create a PersistentVolumeClaim (PVC) for the NFS server using the existing RWO storage class. It should be large enough to cover all the future RWX requirements of the CP4BA components you plan to deploy. For example, if you are deploying just the ODM component of CP4BA, refer to that component's storage requirements and size the claim accordingly.
Create a file rook-nfs-pvc-rwo.yaml with the following content. You must replace the value of <rwo-storage-class> with the RWO storage class you intend to use.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pwx-claim
  namespace: rook-nfs
spec:
  storageClassName: <rwo-storage-class>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
For example, in this YAML, replace <rwo-storage-class> with an existing RWO class. On IBM Cloud VPC, this could be ibmc-vpc-block-10iops-tier.
Apply the YAML to create the PVC:
oc apply -f rook-nfs-pvc-rwo.yaml
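The claim should reach the Bound status once the VPC block volume is provisioned, which you can confirm with:
oc get pvc nfs-pwx-claim -n rook-nfs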
iii. Deploy the NFS server by creating a file nfs-server.yaml with the following content:
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
    - name: share1
      server:
        accessMode: ReadWrite
        squash: "none"
      # A Persistent Volume Claim must be created before creating NFS CRD instance.
      persistentVolumeClaim:
        claimName: nfs-pwx-claim
  # A key/value list of annotations
  annotations:
    rook: nfs
Apply the YAML to deploy the server:
oc apply -f nfs-server.yaml
iv. Verify that the server pod is running:
oc get pods -n rook-nfs
NAME READY STATUS RESTARTS AGE
rook-nfs-0 2/2 Running 0 31s
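You can also check the NFSServer custom resource itself; the resource name below follows the v1alpha1 CRD installed in step 1:
oc get nfsservers.nfs.rook.io -n rook-nfs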
3. Creating the storage class
Now create the storage class for the CP4BA deployment, which needs RWX access. In this exercise, I have used the storage class name automation-storage, which is set in the YAML below under metadata -> name. You can provide a different name if you want. This is the storage class name we will use for the CP4BA deployment.
Create a YAML file automation-storage-sc.yaml with the following content:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: rook-nfs
  name: automation-storage
parameters:
  exportName: share1
  nfsServerName: rook-nfs
  nfsServerNamespace: rook-nfs
provisioner: nfs.rook.io/rook-nfs-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
Apply the YAML:
oc apply -f automation-storage-sc.yaml
Now we have the RWX storage class automation-storage, which we can use for the CP4BA component deployments that require an RWX storage class.
You can view the storage class from the OCP console -> Storage -> StorageClasses.
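Before moving on to CP4BA, you can optionally verify that the new class really provisions RWX volumes with a small test PVC; the name test-rwx-claim below is just an example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rwx-claim
  namespace: rook-nfs
spec:
  storageClassName: automation-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Apply it with oc apply, confirm it reaches the Bound status with oc get pvc test-rwx-claim -n rook-nfs, and then clean it up with oc delete pvc test-rwx-claim -n rook-nfs.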
4. Deploy CP4BA and ODM component:
Next, install Cloud Pak for Business Automation (CP4BA). I am not covering the detailed steps of the CP4BA deployment here, because the focus of this blog is the custom storage class. For detailed instructions on deploying CP4BA and its capabilities, please refer to the CP4BA Knowledge Center.
Once the CP4BA operator is installed, deploy the capabilities by creating the custom resource using the CP4BA operator instance. This can be done either through the OCP console or using scripts. Refer to the steps for the OCP console here.
When selecting the parameters for the shared configuration, one of the important parameters is storage. For the file-based storage classes, choose the automation-storage storage class created in the sections above. For block storage, you can go with the default block storage class on an IBM VPC cluster, ibmc-vpc-block-10iops-tier.
Here is a screenshot of the custom resource YAML for reference.
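For illustration, the storage section of the custom resource looks roughly like the snippet below. This is only a sketch; the exact field names under shared_configuration can vary between CP4BA versions, so treat the CR generated by your operator as the source of truth:
spec:
  shared_configuration:
    storage_configuration:
      sc_slow_file_storage_classname: automation-storage
      sc_medium_file_storage_classname: automation-storage
      sc_fast_file_storage_classname: automation-storage
      sc_block_storage_classname: ibmc-vpc-block-10iops-tier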
With that, we can go ahead and deploy the capabilities. For this article, I have deployed the ODM component.
Once the deployment is complete, verify that the persistent volume claims (PVCs) are bound to the respective storage classes, as shown in the screenshot below. The screenshot shows the PVCs bound to the automation-storage class; there are other PVCs bound to the block storage class as well.
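To check the bindings from the command line, replace <cp4ba-namespace> with the project where CP4BA is installed; the STORAGECLASS column shows which claims landed on automation-storage and which on the block storage class:
oc get pvc -n <cp4ba-namespace>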
Once the installation is complete and all the pods are running successfully, verify the capability that was installed. In this case, I installed the ODM component and was able to access its console, as shown below.
Thanks for reading. I hope this blog offers a different approach to the storage options when deploying the CP4BA starter pattern.
References:
Cloud Pak for Integration - Deploying the Platform UI with RWO storage
#Storage#NFS#IBMCloudPrivate#RedHatOpenShift