1) IBM Cloud Pak for Integration on OCP 3.11
2) Persistent Storage.
The oc client CLI is installed on the client machine. If it is not already installed, follow the instructions at
https://docs.openshift.com/online/cli_reference/get_started_cli.html
4) The default namespace 'ace' is used here as an example. If you are deploying in another namespace, make sure that security policies and secrets are configured appropriately. The predefined SecurityContextConstraints ibm-anyuid-scc has been verified for the supplied Helm chart. If you are deploying in a custom namespace, run the following command for that namespace:
oc adm policy add-scc-to-group ibm-anyuid-scc system:serviceaccounts:<namespace>
Image Pull Secret: If you are using a private Docker registry (including an ICP Docker registry), an image pull secret needs to be created before installing the chart. Here we will use the OCP internal registry, so we will create an image pull secret. A default image pull secret with a name like 'default-dockercfg-<xxxxx>' is created in each namespace for pulling images from that same namespace, so if your ImageStreams are in the same namespace you can use this default secret. If you are pulling from the IBM container registry, use an IBM entitlement key secret.
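As an illustration, a pull secret for a private registry can be created with `oc create secret docker-registry` and then referenced when installing the chart. The secret name, registry server, and credentials below are placeholders, not values from this recipe:

```shell
# Placeholder values -- substitute your registry server and credentials.
SECRET_NAME=ace-pull-secret
NAMESPACE=ace
# Guarded so the snippet is a no-op on a machine without the oc CLI:
if command -v oc >/dev/null 2>&1; then
  oc create secret docker-registry "$SECRET_NAME" \
    --docker-server=image-registry.openshift-image-registry.svc:5000 \
    --docker-username="<username>" \
    --docker-password="<token-or-password>" \
    -n "$NAMESPACE"
fi
echo "pull secret to reference in the chart: $SECRET_NAME"
```

Supply this secret name in the chart's image pull secret field at install time.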
Note: This recipe uses NFS for persistent storage. If you want to use dynamic provisioning of storage, skip steps 1 and 2, supply the name of the 'Storage Class', and check the 'Use dynamic provisioning' check-box. Note that the storage must support access mode RWX.
Step-by-step
-
Create directory for ACE dashboard in NFS mount
Log in to the NFS server and go to the NFS mount. Create a directory where the ACE Dashboard configuration/data will be persisted.
I have created the directory 'ace-dashboard' here.
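For reference, the directory creation on the NFS server might look like the following; the export root `/nfs/integration` is taken from the PV definition later in this recipe, and the permissive mode is one common way to let the chart's non-root containers write to the share:

```shell
# Export root is configurable; the default matches the PV definition in this recipe.
NFS_EXPORT="${NFS_EXPORT:-/nfs/integration}"
mkdir -p "$NFS_EXPORT/ace-dashboard"
# Loosen permissions so containers running under an arbitrary UID can write:
chmod 777 "$NFS_EXPORT/ace-dashboard" 2>/dev/null
echo "dashboard data directory: $NFS_EXPORT/ace-dashboard"
```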
-
Create Persistent Volume
Log in to the OpenShift cluster.
oc login <OpenShift cluster URL> -u <username> -p <password>
Below is a sample JSON file to create the PV for the ACE Dashboard. Save it in a file, say 'pvcreate.json'. Note that the accessMode for the ACE Dashboard should be 'ReadWriteMany' (RWX).
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "ace-dashboard"
  },
  "spec": {
    "capacity": {
      "storage": "5Gi"
    },
    "nfs": {
      "server": "10.41.16.69",
      "path": "/nfs/integration/ace-dashboard"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "persistentVolumeReclaimPolicy": "Retain"
  }
}
Now run the command below to create the persistent volume from this definition.
oc create -f pvcreate.json
You may also create the Persistent Volume Claim yourself; however, here we will leave it to Kubernetes to create the PVC.
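If you do prefer to create the claim yourself rather than letting the chart generate it, a minimal PVC bound to the PV above could look like this; the claim name 'ace-dashboard-pvc' is an example, not a value mandated by the chart:

```shell
# Example PVC definition matching the PV above (claim name is illustrative).
cat > pvc-ace-dashboard.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "ace-dashboard-pvc",
    "namespace": "ace"
  },
  "spec": {
    "accessModes": ["ReadWriteMany"],
    "resources": {
      "requests": {
        "storage": "5Gi"
      }
    },
    "volumeName": "ace-dashboard"
  }
}
EOF
# Then create it against the cluster with: oc create -f pvc-ace-dashboard.json
```

The `volumeName` field pins the claim to the 'ace-dashboard' PV created earlier, so it will not bind to some other available volume.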
-
Create ACE Dashboard instance from Platform Navigator
Log in to the ICP4I Platform Navigator.
Click 'Add new instance' under 'App Connect' and click 'Continue'. Fill in the appropriate values in the respective fields.
Specify the hostname of the ingress proxy to be configured; in this example, "icp-proxy.9.204.169.137.nip.io".
Uncheck 'Use dynamic provisioning' since we are using NFS, and leave Persistent Volume Claim blank. Click 'Install'.
If you are using dynamic provisioning, you will have skipped steps 1 and 2. In that case, check the 'Use dynamic provisioning' check-box and supply the name of the storage class in 'Persistent storage class'. Note that the storage must support access mode RWX.
It will take a few minutes to install and configure the ACE Dashboard. Go to the Platform Navigator and you should see the ACE Dashboard link there.
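You can also confirm the install from the command line by checking the pods and claims in the target namespace; this is a generic check, not a chart-specific command from this recipe:

```shell
NAMESPACE=ace
# Guarded so the snippet is a no-op on a machine without the oc CLI:
if command -v oc >/dev/null 2>&1; then
  oc get pods -n "$NAMESPACE"
  oc get pvc -n "$NAMESPACE"
fi
echo "checked namespace: $NAMESPACE"
```

All dashboard pods should reach Running status and the PVC should show as Bound before you open the dashboard link.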
Click the dashboard link; it will take you to the page from which you can deploy BAR files and create integration servers.
Conclusion
-
In this recipe, we learnt how to deploy the ACE Dashboard on ICP4I using persistent storage.
To learn how to deploy a BAR file and create an IntegrationServer, see the recipe below:
Deploying ACE IntegrationServer on IBM Cloud Pak for Integration