Inflight Case Instance Migration Strategies from IBM Business Automation Workflow (BAW Case) 25.0.0 On-Premise to Cloud Pak for Business Automation 25.0.0

By APARNA BULUSU

  

Overview 

IBM provides two supported approaches to move case instances from an on-premise IBM Business Automation Workflow (BAW Case) 25.0.0 environment to Cloud Pak for Business Automation 25.0.0 (CP4BA 25.0.0): 
 
1. Re-use CPE & ICN Databases 
2. Backup and Restore CPE & ICN Databases 
 
Both methods ensure that existing case instances are retained and usable post-migration. However, each has unique considerations regarding setup, downtime, and post-migration flexibility. 
 

Method 1: Re-use CPE & ICN Databases 

Description: 

In this method, the existing on-premises CPE and ICN databases are directly connected to the CP4BA runtime. 

Pros:

- Uses the same LDAP as on-prem 
- Easy setup; no backup/restore steps needed 
- No new databases needed 
- Existing case instances are migrated and remain usable

Cons:

- After migration, the on-prem application cannot be used 
- On-prem and CP4BA environments cannot run in parallel 
 

Method 2: Backup and Restore CPE & ICN Databases 

Description: 

This method involves taking a backup of the on-prem CPE and ICN databases and restoring them to newly provisioned databases in the CP4BA environment. 

Pros:

- Uses the same LDAP as on-prem 
- On-prem system remains intact and usable 
- On-prem and CP4BA can run side-by-side 
- Existing case instances are migrated and remain usable
 

Cons:

- Involves extra steps for backup/restore 
- Requires provisioning of new databases 

- Case instances created on-premises after the database backup is taken are not carried over to the container environment
 

Comparison Table 

Criteria                                  | Re-use CPE & ICN Databases | Backup and Restore CPE & ICN Databases
LDAP Reuse                                | ✅ Yes                      | ✅ Yes
Setup Complexity                          | ✅ Easy                     | ❌ More involved
New DB Required                           | ❌ No                       | ✅ Yes
Downtime Required                         | ✅ Yes                      | ❌ No
On-Prem Application Usable Post-Migration | ❌ No                       | ✅ Yes
Parallel Run (On-Prem + CP4BA)            | ❌ No                       | ✅ Yes
Rollback Possibility                      | ❌ No                       | ✅ Yes
Case Instances Migrated                   | ✅ Yes                      | ✅ Yes

Recommendation 

Recommended Method: Backup and Restore 

This method is more suitable for most enterprise environments, especially when: 
- The business needs to test CP4BA before decommissioning on-prem. 
- There's a need to minimize risk with rollback options. 
- The customer prefers zero disruption to existing workloads. 
- The organization plans a gradual cut-over to CP4BA. 
 
Although it requires additional setup and database resources, the operational flexibility, fallback capability, and coexistence benefits outweigh the initial complexity. 
 

The steps below focus on the Backup and Restore method: a clean, infrastructure-independent way to move your runtime environment with the inflight cases intact.

Prerequisites


- Migration is supported from an on-prem IBM BAW environment with an external IBM Content Platform Engine and an external IBM Content Navigator.
- The LDAP configuration must be consistent across the source on-premises and target Cloud Pak for Business Automation environments.
- The database type and version (DB2, Oracle, or MSSQL) must be the same in both the on-premises and Cloud Pak for Business Automation environments (a quick version-check sketch follows this list).
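
To confirm the version-parity prerequisite before you begin, you can check the server versions on both sides. A minimal sketch run from a shell on the respective database hosts; the connection details and credentials are placeholders:

# DB2: report the server level (run as the instance owner, for example db2inst1)
db2level

# Oracle: query the version banner through sqlplus (placeholder connect string)
sqlplus -s system/<password>@<db-host>:1521/<service> <<'SQL'
SELECT banner FROM v$version;
EXIT
SQL

# MSSQL: query the engine version through sqlcmd (placeholder server and credentials)
sqlcmd -S <sql-server-host> -U sa -P '<password>' -Q "SELECT @@VERSION;"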

Step 1: Back Up On-Prem Databases

For DB2:


Run these as db2inst1 on the on-premise environment:
db2 backup db GCDDB to /home/db2inst1/backup/GCDDB
db2 backup db DOSDB to /home/db2inst1/backup/DOSDB
db2 backup db TOSDB to /home/db2inst1/backup/TOSDB
db2 backup db DOCSDB to /home/db2inst1/backup/DOCSDB
db2 backup db ICNDB to /home/db2inst1/backup/ICNDB
db2 backup db BPMDB to /home/db2inst1/backup/BPMDB
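
Optionally, before copying the images to the CP4BA database server, you can verify that each backup image is readable with the standard db2ckbkp utility. A minimal sketch; the timestamped image file name is a placeholder:

cd /home/db2inst1/backup/GCDDB
ls -l                                                  # note the generated image name, e.g. GCDDB.0.db2inst1.DBPART000.<timestamp>.001
db2ckbkp GCDDB.0.db2inst1.DBPART000.<timestamp>.001    # checks the integrity of the backup image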

For Oracle:


# BAW Database
expdp system/Password1@wflworc1.fyre.ibm.com:1521/orclpdb.fyre.ibm.com full=y directory=DATA_PUMP_DIR dumpfile=full_backup.dmp logfile=full_backup.log parallel=4

# ICN Database
expdp system/Password1@icnorc1.fyre.ibm.com:1521/orclpdb.fyre.ibm.com full=y directory=DATA_PUMP_DIR dumpfile=icnorc_full_backup.dmp logfile=icnorc_full_backup.log parallel=4

# ICN Tablespaces
expdp system/Password1@icnorc1.fyre.ibm.com:1521/orclpdb.fyre.ibm.com tablespaces=ICNDB directory=DATA_PUMP_DIR dumpfile=icntablespaces_backup.dmp logfile=tablespaces_backup.log parallel=4

# CPE Database
expdp system/Password1@fncpeorc1.fyre.ibm.com:1521/orclpdb.fyre.ibm.com full=y directory=DATA_PUMP_DIR dumpfile=fncpeorc_full_backup.dmp logfile=fncpeorc_full_backup.log parallel=4
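
The expdp jobs write their dump and log files to the DATA_PUMP_DIR directory object on the source database server. If you want to confirm or redirect that location before exporting, a minimal sketch (mirroring the directory setup shown later for the restore); the path is a placeholder:

sqlplus -s system/Password1@wflworc1.fyre.ibm.com:1521/orclpdb.fyre.ibm.com <<'SQL'
-- Show where the dump files will be written
SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'DATA_PUMP_DIR';
-- Optionally point DATA_PUMP_DIR at a dedicated backup location
-- CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/home/oracle/backup/';
EXIT
SQL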

For Microsoft SQL Server (MSSQL):


-- Backup
BACKUP DATABASE [gcdbd] TO DISK = 'D:\SQLBackups\gcdbd.bak' WITH INIT, COMPRESSION;
-- Repeat for all CPE, Navigator, and BPM databases
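
To avoid repeating the BACKUP statement by hand, you can loop over the databases from a shell and verify each image with RESTORE VERIFYONLY. A minimal sketch, assuming the sqlcmd client is available; the server name, credentials, and all database names other than gcdbd and BPMDB are illustrative placeholders to be matched to your environment:

# Back up and verify each CPE, Navigator, and BPM database (adjust the list to your names)
for DB in gcdbd dosdb tosdb docsdb icndb BPMDB; do
  sqlcmd -S <sql-server-host> -U sa -P '<password>' -Q "BACKUP DATABASE [$DB] TO DISK = 'D:\SQLBackups\\$DB.bak' WITH INIT, COMPRESSION;"
  sqlcmd -S <sql-server-host> -U sa -P '<password>' -Q "RESTORE VERIFYONLY FROM DISK = 'D:\SQLBackups\\$DB.bak';"
done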

Step 2: Restore the Databases on CP4BA’s Database Server

DB2 Restore:


db2 restore db GCDDB from /home/db2inst1/restore/GCDDB
db2 restore db DOSDB from /home/db2inst1/restore/DOSDB
db2 restore db TOSDB from /home/db2inst1/restore/TOSDB
db2 restore db DOCSDB from /home/db2inst1/restore/DOCSDB
db2 restore db ICNDB from /home/db2inst1/restore/ICNDB
db2 restore db BPMDB from /home/db2inst1/restore/BPMDB
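
After the restores complete, a quick connection test confirms that each database is usable before the containers are pointed at it. A minimal sketch for one database; the user name is a placeholder and the same check applies to the other databases:

db2 connect to GCDDB user <db-user>
db2 "SELECT COUNT(*) FROM syscat.tables"    # any simple catalog query is enough to prove the database opens
db2 connect reset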

Oracle Pre-Restore Setup:


Create the users on the container environment's Oracle database:
- Use create_user.sql from the BAW on-premises DB scripts to create the BAW user in the container's database.
- Create the same users that the on-premises Oracle database uses for the GCD, DOS, and TOS object stores (a minimal sketch follows this list).
- Run the ICN database script used in the IBM Content Navigator setup, and oracle_one_script.sql from the on-premises Navigator configure directory, to create the Navigator users in the container's database.
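
For the object store users, an illustrative sketch is shown below. The user name, password, and tablespace are placeholders, and the actual privileges, quotas, and tablespaces must match what the on-premises object store users were granted (or what the product-provided scripts create):

sqlplus -s system/Password1@cp4baorcdb1.fyre.ibm.com:1521/orclpdb.fyre.ibm.com <<'SQL'
-- Illustrative only: recreate one object store user with the same name as on-premises
CREATE USER dosuser IDENTIFIED BY "<password>" DEFAULT TABLESPACE <dos_tablespace> TEMPORARY TABLESPACE TEMP;
GRANT CONNECT, RESOURCE TO dosuser;
ALTER USER dosuser QUOTA UNLIMITED ON <dos_tablespace>;
EXIT
SQL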

Ensure correct DATA_PUMP_DIR path:
SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'DATA_PUMP_DIR';
CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/home/oracle/backup/';
GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO system;

Oracle Restore:


# Restore BAW
impdp system/Password1@cp4baorcdb1.fyre.ibm.com:1521/orclpdb.fyre.ibm.com full=y directory=DATA_PUMP_DIR dumpfile=full_backup.dmp logfile=full_restore.log parallel=4 transform=oid:n

# Restore CPE
impdp system/Password1@cp4baorcdb1.fyre.ibm.com:1521/orclpdb.fyre.ibm.com full=y directory=DATA_PUMP_DIR dumpfile=fncpeorc_full_backup.dmp logfile=fncpe_restore.log parallel=4 transform=oid:n

# Restore ICN
impdp system/Password1@cp4baorcdb1.fyre.ibm.com:1521/orclpdb.fyre.ibm.com tablespaces=ICNDB directory=DATA_PUMP_DIR dumpfile=icntablespaces_backup.dmp logfile=tablespaces_restore.log parallel=4
impdp system/Password1@cp4baorcdb1.fyre.ibm.com:1521/orclpdb.fyre.ibm.com full=y directory=DATA_PUMP_DIR dumpfile=icnorc_full_backup.dmp logfile=icn_restore.log parallel=4 transform=oid:n
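
A full Data Pump import into a database where the users and tablespaces were pre-created may report benign ORA-31684 ("already exists") messages; it is still worth scanning the logs for anything else once the jobs finish. A minimal sketch, assuming the log files are in the DATA_PUMP_DIR path configured above:

cd /home/oracle/backup/
grep -i "ORA-" full_restore.log fncpe_restore.log tablespaces_restore.log icn_restore.log | grep -v "ORA-31684"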

MSSQL Restore:


-- Restore GCD Database
RESTORE DATABASE [gcdbd]
FROM DISK = 'D:\SQLRestore\gcdbd.bak'
WITH MOVE 'gcdbd_Data' TO 'D:\SQLData\gcdbd.mdf',
     MOVE 'gcdbd_Log' TO 'D:\SQLData\gcdbd_log.ldf',
     REPLACE;

Execute similar RESTORE statements for the other object store databases and the Navigator database.


-- Restore BPM Database
USE MASTER
GO
CREATE LOGIN sqluser WITH PASSWORD='@DB_PASSWD@'

RESTORE DATABASE [BPMDB] FROM DISK = 'D:\SQLRestore\BPMDB.bak'

USE BPMDB;
GO
ALTER USER sqluser WITH DEFAULT_SCHEMA=sqluser
EXEC sp_addrolemember N'SqlJDBCXAUser', N'sqluser';
ALTER AUTHORIZATION on schema::sqluser TO [sqluser];
EXEC sp_addrolemember 'db_ddladmin', sqluser;
EXEC sp_addrolemember 'db_datareader', sqluser;
EXEC sp_addrolemember 'db_datawriter', sqluser;
ALTER LOGIN sqluser ENABLE;
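
Because BPMDB was restored onto a different SQL Server instance, the sqluser database user inside the restored database can be orphaned (its SID may not match the newly created sqluser login). A minimal check-and-remap sketch, with placeholder connection details:

# Check whether the database user is still mapped to a server login
sqlcmd -S <sql-server-host> -U sa -P '<password>' -d BPMDB -Q "SELECT dp.name AS db_user, sp.name AS mapped_login FROM sys.database_principals dp LEFT JOIN sys.server_principals sp ON dp.sid = sp.sid WHERE dp.name = 'sqluser';"
# If mapped_login is NULL, re-map the user to the login created above
sqlcmd -S <sql-server-host> -U sa -P '<password>' -d BPMDB -Q "ALTER USER sqluser WITH LOGIN = sqluser;"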

Before starting the migration:

1.     Fill in the migration planning sheet: https://www.ibm.com/support/pages/system/files/inline-files/MigrationPlanningSheet_BAW23x%20(2).pdf

2.     Disable custom plug-ins in IBM Content Navigator; they can be reconfigured after migration.

Prepare the Cluster

1.     To prepare to install Cloud Pak for Business Automation, follow the instructions in Installing a CP4BA multi-pattern production deployment.

2.     Log in to the cluster with the cluster administrator that you used in Option 1: Preparing your cluster for an online deployment or a non-administrator user who has access to the project.

oc login https://<cluster-ip>:<port> -u <cluster-admin> -p <password>

where

o   <cluster-ip> is the IP address of the cluster

o   <port> is the port number of the cluster

o   <cluster-admin> is the user name of the cluster administrator

o   <password> is the password for your <cluster-admin> user

3.     View the list of projects in your cluster to see the target project before you run the deployment script:

oc get projects

 If you used the All Namespaces option to install the Cloud Pak operator, then you must have another project in addition to openshift-operators in the cluster before you create the deployment. Change the scope to the project that you created for your deployment.

oc project <project_name>

The specified project is used in all later operations that affect project-scoped content.

4.     Optional: If you need to, download the cert-kubernetes repository to an amd64/x86, a Linux on Z, or a Linux on Power based VM/machine. For more information about downloading cert-kubernetes, see Option 1: Preparing your cluster for an online deployment.
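
If you do need a local copy of the scripts, the cert-kubernetes repository is typically cloned from GitHub; the branch or tag name below is an assumption and should be matched to your CP4BA release:

git clone https://github.com/icp4a/cert-kubernetes.git
cd cert-kubernetes
git checkout <release-branch-or-tag>   # use the branch or tag that corresponds to CP4BA 25.0.0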

Generating the CR

After you prepare the cluster, you can generate and run the CR file. The CR file acts as a template of what you will install, and can be customized according to the components that the operator supports for installation.

Procedure

1.     In the cert-kubernetes/scripts folder, run the ./case-migrate-cp4a-prerequisites.sh script. Running the prerequisites script gives you the instructions to follow.

2.     Run the script in property mode to generate the properties files:

./case-migrate-cp4a-prerequisites.sh -m property

3.  Migration from BAW on-premises to Cloud Pak for Business Automation is supported only for the runtime environment, so select Workflow Runtime.

4.  Select the Lightweight Directory Access Protocol (LDAP) type that you use in the on-prem environment.

5.  Enter the storage class name or names you want to use.

For more information, see Storage considerations.

6.  Select the deployment profile: small, medium, or large.

For more information, see System requirements.

7.  Select the database type.

    Note: PostgreSQL is not supported for moving IBM BAW on-premises to Cloud Pak for Business Automation.

8.  Enter an alias name for the database server.

9.  Enter the name of an existing project (namespace) where you want to deploy Cloud Pak for Business Automation.

10.  Answer the question asking whether you want to restrict access to the Cloud Pak for Business Automation deployment.

11.  View your results.

The database and LDAP property files for Cloud Pak for Business Automation are created, followed by the property file for each database name and user, followed by the Cloud Pak for Business Automation property files.

In cert-kubernetes/scripts/cp4ba-prerequisites/propertyfile, the following files are created:

o   cp4ba_case_migration.property

o   cp4ba_db_name_user.property

o   cp4ba_db_server.property

o   cp4ba_LDAP.property

o   cp4ba_user_profile.property

12.  To configure multiple Target Object Stores, enter a value greater than 1 for the number of Target Object Stores.

13.  Update the property files.

a. Update the cp4ba_case_migration.property file with the values from the on-prem Case Manager server and the multiple Target Object Store information from the on-prem environment.

b. Update the cp4ba_db_name_user.property file with the user names and passwords of the GCD, DOCS, DOS, TOS, IBM Content Navigator (ICN), and Business Automation Workflow (BAW, for runtime) databases, using the same values as on-prem.

c. Update the cp4ba_db_server.property file with the same database server details as in the on-prem IBM Business Automation Workflow.

d. Update the cp4ba_LDAP.property file with the same LDAP details as in the on-prem IBM Business Automation Workflow.

e. Update the cp4ba_user_profile.property file with the license, admin user (as in the on-prem Case Manager), and keystore passwords.

14.  Run the ./case-migrate-cp4a-prerequisites.sh file in generate mode using the following command:

./case-migrate-cp4a-prerequisites.sh -m generate

This command generates all of the database SQL script files required by the Cloud Pak for Business Automation deployment, based on the property files.

You do not need to run the generated database scripts, because the databases already exist (they were restored in Step 2).

15.  Apply the secrets by running the create_secret.sh file in the ./cp4ba-prerequisites folder.

create_secret.sh

The required secrets for LDAP, FileNet, Navigator, and Workflow are created.
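
To confirm that the secrets landed in the target project before you validate, you can list them; the project name is a placeholder, and the exact secret names depend on what the prerequisites script generated:

oc get secrets -n <project_name>
# Look for the LDAP bind, FileNet (CPE), Navigator, and Workflow credential secrets created by create_secret.sh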

16.  To validate the configuration before you deploy, run the following command:

./case-migrate-cp4a-prerequisites.sh -m validate

This command checks that everything has been created: Slow/Medium/Fast/Block storage classes, required Kubernetes secrets, LDAP connection, and database connections for all required databases.

17.  Generate the CR. From the /scripts folder, run the ./case-migrate-cp4a-deployment.sh command.

Answer the on-screen prompts and check the input as you go.

The CR file /cert-kubernetes-master/scripts/generated-cr/ibm_cp4a_cr_final.yaml is generated.

Deploying the Custom Resource file

After you successfully generate the Custom Resource file, you can deploy Cloud Pak for Business Automation. The Custom Resource acts as a template of what you will install.

1.     Validate your Custom Resource (CR) file before you apply it or save it in the YAML view. It is likely that you edited the file multiple times, and possibly introduced errors or missed values during your customizations. For more information, see Validating the YAML in your custom resource file.

2.     Apply the updated CR to the operator. For more information, see Applying the updated custom resource.
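
For reference, applying the generated CR and watching the rollout typically looks like the following sketch; the project name is a placeholder, and the icp4acluster resource kind is an assumption based on recent CP4BA releases (check your generated CR if it differs):

oc apply -f generated-cr/ibm_cp4a_cr_final.yaml -n <project_name>
# Watch the custom resource and the pods while the operator reconciles the deployment
oc get icp4acluster -n <project_name>
oc get pods -n <project_name> -w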


Once the CR is deployed, you can access the on-premises inflight cases in Cloud Pak for Business Automation and modernize them by adopting the latest Case features in the migrated solutions.
