Platform and Cloud Pak

Tips for Workflow Authoring Enterprise Pattern v20.0.3 deployment on OpenShift

By PING MEI posted Tue December 29, 2020 04:59 AM


A Workflow Authoring environment is the single authoring and development environment for the IBM Cloud Pak for Automation platform, where you author business services, applications, and digital workers. The following tips may help you during deployment.


  • Dynamic provisioning is recommended for Workflow Authoring Enterprise Pattern deployments. By default, dynamic provisioning is set to true in the custom resource (CR) file, so the only step left for you is to update the storage class name for dynamic provisioning in your CR.

           Use the command ‘oc get storageclass’ to check whether dynamic storage is installed in your environment. If it is not, you can set up an NFS client provisioner. After installation, the storage class appears in the command output.
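The check above can be sketched as follows; the storage class name ‘managed-nfs-storage’ and the provisioner name are illustrative examples of what an NFS client provisioner typically creates, not values from your cluster.

```shell
# Check whether a dynamic storage class exists in the cluster
oc get storageclass
# Example output once a provisioner (e.g. an NFS client provisioner) is set up:
#   NAME                  PROVISIONER        ...
#   managed-nfs-storage   example.com/nfs    ...
# Put that NAME into the dynamic-provisioning storage class field of your CR.
```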

  • To use Oracle or PostgreSQL, or to use your own JDBC driver for Db2, you must create a persistent volume (PV) and persistent volume claim (PVC) to store the driver files. An easier way, however, is to reuse the operator’s PV and PVC, which were already created when the operator was deployed; then you do not need to create your own PV and PVC.
    • Get the running operator pod by using the following command:

                        oc get pod|grep ibm-cp4a-operator|grep Running

    • Copy the JDBC drivers by running the following command:

                        oc cp /tmp/jdbc ${operator_podname}:/opt/ansible/share
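Putting the two steps together, a minimal sketch looks like this; the local driver directory /tmp/jdbc is taken from the command above, while the pod-name extraction with awk and the assumption of a single running operator pod are illustrative.

```shell
# Find the running operator pod and capture its name
# (assumes exactly one ibm-cp4a-operator pod is in Running state)
operator_podname=$(oc get pod | grep ibm-cp4a-operator | grep Running | awk '{print $1}')

# Copy the local JDBC driver directory into the operator's shared PV
oc cp /tmp/jdbc "${operator_podname}":/opt/ansible/share
```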


  • For PostgreSQL, database names are case sensitive, so the name in the custom resource must exactly match the one created in the database.

            For example, suppose you create your database in PostgreSQL with an uppercase name. You must then enter “TESTDB1” as the database name in your custom resource file, not “testdb1”; otherwise you will get an error that the defined database cannot be found.
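As a sketch of the database-creation side (the psql invocation and the postgres user are assumptions for illustration): PostgreSQL folds unquoted identifiers to lowercase, so the name must be double-quoted for an uppercase TESTDB1 to actually be created.

```shell
# Double quotes preserve case; without them PostgreSQL would create "testdb1"
psql -U postgres -c 'CREATE DATABASE "TESTDB1";'
```

The custom resource must then use exactly TESTDB1 as the database name.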

  • To verify whether the deployment is ready, first check the jobs related to the deployment; their status should be Completed. In some scenarios, such as an upgrade, you may see failed jobs, especially the database job, because the database job starts before the upgrade has completed. Wait until the upgrade completes without error, then check the jobs again. If there is at least one completed run of a job, you can ignore the failed runs.
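That check can be sketched as follows; the job names shown are illustrative, not the actual names your deployment creates.

```shell
# List deployment-related jobs; a job is done when COMPLETIONS reads 1/1
oc get jobs
# Keep only the completed runs
oc get jobs | grep '1/1'
# Example: a failed database job alongside a completed rerun can be ignored:
#   NAME                      COMPLETIONS   <- 0/1 run started before upgrade
#   demo-workflow-db-init     0/1              finished; 1/1 rerun is enough
#   demo-workflow-db-init-2   1/1
```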
