Authors: Dhanesh M, Nusaiba K K
Introduction
In today’s increasingly distributed and hybrid cloud environments, organizations are expanding across regions and data centers to meet performance, regulatory, and availability requirements. With IBM Cloud Pak for Business Automation (CP4BA), enterprises can now architect their automation solutions to span across geographies, ensuring localized data access, reduced latency, and operational resilience.
One powerful capability that enables this is the Geo-Distributed Content Platform Engine (CPE) deployment. By leveraging a geographically dispersed FileNet P8 domain, organizations can deploy CP4BA’s content services in multiple locations while maintaining a unified and consistent enterprise content repository.
In this blog, we will showcase how to perform a geo-distributed deployment in CP4BA, along with the necessary configurations. Whether you're aiming to enhance system redundancy or optimize user experience across multiple regions, this guide will walk you through designing and implementing a scalable, resilient content services platform using CP4BA.
Understanding a geo-distributed FileNet P8 domain
Before diving into the deployment steps, let's understand the concept of a geographically dispersed FileNet P8 domain, which forms the foundation of CP4BA's geo-distributed deployment model.
In simple terms, a geo-distributed FileNet P8 domain refers to a single FileNet domain accessed by multiple deployments hosted across different geographical regions. These deployments operate in an active-active configuration and are connected over a network, either a local area network (LAN) or a wide area network (WAN).
Key Concepts and Components
- Sites: A site represents a geographic location where resources are connected via a LAN. Within a FileNet P8 domain, each site is defined by a unique name and contains resources such as:
  - Object stores
  - Index areas
  - Advanced storage areas
  - Virtual servers
- Virtual Servers and Namespaces:
  - Each CP4BA deployment is registered as a virtual server in the FileNet P8 domain.
  - The virtual server name is derived from the metadata.name of the custom resource (CR) used during deployment.
  - All deployments participating in the geo-distributed FileNet domain must use the same namespace name, ensuring domain consistency.
  - However, each CR must still have a unique metadata.name to differentiate the deployments within the domain.
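To make this concrete, the metadata stanzas of the two CRs might look like the sketch below. The names content and contentgeo are examples, not required values; the point is that namespace matches across sites while metadata.name differs:

```yaml
# First site's CR (example names)
metadata:
  name: content        # unique virtual server name for site 1
  namespace: cp4ba     # namespace name shared by all sites
---
# Second site's CR
metadata:
  name: contentgeo     # different name, registers a second virtual server
  namespace: cp4ba     # same namespace name as site 1
```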
Essential Features for a Geo-Distributed Setup
To enable geo-distributed object stores and content services, the following capabilities must be configured:
- Directory Service Provider (LDAP)
  - A single LDAP provider must be accessible by all deployments across regions.
  - This allows Content Platform Engine (CPE) instances to share user credentials and authenticate securely using LTPA tokens.
  - Ensure the CR parameter sc_skip_ldap_config is set to false to enforce LDAP setup.
- Request Forwarding
- Advanced Storage Areas with Content Replication
  - CPE supports replication of content to multiple storage devices and across instances of the same type.
  - There is no need to mount file systems across WANs, simplifying storage architecture and reducing risk.
- Cross-Site CPE Server Communication
  - Allows advanced storage areas to replicate across sites without requiring file system directories to be mounted between regions.
  - This enables content synchronization across regions in a clean, secure, and performant manner.
Preparing the first OCP cluster for geo-distributed CPE deployment
To begin the geo-distributed deployment of CP4BA, we start by performing the initial setup on the first OpenShift (OCP) cluster. This setup is similar to a standard production deployment up to the point of generating the custom resource (CR) file.
In this phase, the following steps are performed:
- Run the cluster admin script (cp4a-clusteradmin-setup.sh)
This script sets up the foundational permissions and configurations needed for CP4BA.
It installs the required CP4BA operators into the specified namespace, enabling the platform to manage the lifecycle of its components across the cluster.
- Execute the prerequisite script (cp4a-prerequisites.sh)
The cp4a-prerequisites.sh script prepares the environment for deployment by generating necessary property files and Kubernetes secrets—especially for LDAP and database integration.
Instead of manually creating these secrets, the script streamlines the process by generating YAML templates that can be applied directly.
The script supports the following modes:
a. property – Generates the required property files. Review and update these files with your LDAP configuration, user attributes, and database connection details.
b. generate – Uses the updated property files to generate YAML templates for secrets, along with database scripts to create the databases.
c. validate – Validates that the generated secrets are properly configured and ready to use.
- Generate the Custom Resource (CR) File
Use the deployment script (cp4a-deployment.sh) to generate the CR YAML file that defines the specifics for your CP4BA deployment, including the capabilities and their components that you selected.
Note: These steps follow the standard CP4BA production deployment process. If you're new to this or need a detailed walkthrough, refer to the CP4BA production deployment doc or our blog on CP4BA deployment: https://community.ibm.com/community/user/blogs/dheeraj-krishan/2025/06/05/ibm-cloud-pak-for-business-automation-fresh-produc
- Modify and apply the CR for the first cluster deployment
Once the Custom Resource (CR) is generated, the next step is to customize it for a geo-distributed Content Platform Engine (CPE) deployment. This involves setting specific configuration parameters required to support LDAP-based authentication and proper initialization of content services.
The following steps outline the essential modifications you need to make before applying the CR in the first OpenShift cluster (Initial Site).
a. Enable LDAP for Cross-Site Authentication
Before applying the CR, add the following parameter under the shared_configuration section:
shared_configuration:
sc_skip_ldap_config: false
Geo-distributed deployments rely on LTPA (Lightweight Third-Party Authentication) tokens for cross-site authentication. LTPA only works with LDAP directory providers and not SCIM servers.
Therefore, both deployments must use LDAP as the CE connection in ACCE. Setting sc_skip_ldap_config: false ensures the deployment is fully configured to support LDAP-based identity management.
b. Set Initialization and Verification Parameters to true
To ensure that the content services are initialized and verified correctly during deployment, add or update the following parameters in the CR:
shared_configuration:
sc_content_initialization: true
sc_content_verification: true
These parameters are especially important for the first (initial) deployment site, as it will:
- Create the domain and required object stores
- Initialize index areas and advanced storage areas
- Verify the integrity of the deployed content services
Note: If you use a custom CA for Content Platform Engine (CPE), the secret that contains the custom CA (for example, custom-root-ca-secret) must be specified in the Custom Resource (CR):
ecm_configuration:
fncm_auth_ca_secret_name: custom-root-ca-secret
The same secret needs to be added to the truststore:
trusted_certificate_list:
- custom-root-ca-secret
c. Apply the Custom Resource (CR)
Once the properties are correctly added and validated, apply the CR to your OpenShift cluster:
oc apply -f ibm_content_cr_final.yaml
This initiates the CP4BA capabilities in the first cluster, including setting up the FileNet domain, configuring LDAP, and initializing all the necessary resources for content services. Monitor the deployment via oc get pods and the CP4BA operator logs to ensure that all components come up successfully.
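If you want to script the pod check rather than eyeball it, the helper below is a sketch. It parses the STATUS column of oc get pods output; the namespace name cp4ba is the example used throughout this blog:

```shell
# all_ready: succeed only when every pod status is Running or Completed.
# Pass it the output of 'oc get pods --no-headers' as a single argument.
all_ready() {
  ! printf '%s\n' "$1" | awk 'NF {print $3}' | grep -qvE '^(Running|Completed)$'
}

# Example usage against a live cluster:
#   all_ready "$(oc get pods -n cp4ba --no-headers)" && echo "all pods up"
```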
Setting up the second OCP cluster for geo-distributed CPE deployment
After successfully deploying CP4BA in the first cluster, the next step is to prepare the second OpenShift cluster to join the same geo-distributed FileNet P8 domain. This involves exporting essential secrets from the initial deployment and importing them into the second cluster, while also ensuring configuration consistency across sites.
Below are the key steps involved:
- Export required secrets from the first deployment
Export the following secrets from the initial deployment namespace (e.g., cp4ba). These include LTPA keys, OIDC credentials, and root certificates required for trust and authentication between clusters.
oc get secret content-ecm-ltpa -n cp4ba -o yaml > ecm-ltpa-export.yaml
oc get secret icp4a-root-ca -n cp4ba -o yaml > icp4a-root-ca.yaml
oc get secret content-cpe-oidc-secret -n cp4ba -o yaml > cpe-oidc.yaml
oc project cp4ba
oc get secret iaf-system-automationui-aui-zen-ca -o template --template='{{ index .data "tls.crt" }}' | base64 --decode > zenRootCA.cert
These secrets are critical for OIDC authentication, LTPA token trust, and UI certificate validation between clusters.
- Clean up the exported YAML files
Before importing the secrets into the second cluster, remove the following metadata fields from each of the YAML files to avoid conflicts:
- ownerReferences
- resourceVersion
- uid
- creationTimestamp
This step ensures that Kubernetes treats the secrets as new resources in the second cluster.
Files to clean:
- ecm-ltpa-export.yaml
- icp4a-root-ca.yaml
- cpe-oidc.yaml
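You can remove these fields by hand in an editor, or script the cleanup. The function below is a sketch using sed; it assumes the standard two-space indentation that oc get -o yaml produces, so review the output before applying it to your cluster:

```shell
# clean_secret: strip cluster-specific metadata from an exported secret YAML.
# Deletes the ownerReferences block plus the single-line resourceVersion,
# uid, and creationTimestamp fields under metadata.
clean_secret() {
  sed -e '/^  ownerReferences:/,/^  [a-z]/{/^  ownerReferences:/d;/^  [a-z]/!d;}' \
      -e '/^  resourceVersion:/d' \
      -e '/^  uid:/d' \
      -e '/^  creationTimestamp:/d' \
      "$1"
}

# Example: clean_secret ecm-ltpa-export.yaml > ecm-ltpa-clean.yaml
```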
- Update the metadata name for the second deployment
Make sure the metadata.name used in the second deployment is unique and different from the initial one. Update this new deployment name in:
- ecm-ltpa-export.yaml
- cpe-oidc.yaml
Each deployment must register as a unique virtual server in the FileNet P8 domain. Using a different metadata.name ensures correct separation between sites.
- Use the same namespace in the second cluster.
The namespace used in the second cluster must match the one used in the first deployment.
oc new-project cp4ba
The namespace consistency ensures all deployments align under the same logical grouping in the FileNet P8 domain.
- Import secrets into the second cluster
Apply the cleaned and updated secrets into the same namespace (cp4ba) in the second cluster.
oc apply -f ecm-ltpa-export.yaml -n cp4ba
oc apply -f icp4a-root-ca.yaml -n cp4ba
oc apply -f cpe-oidc.yaml -n cp4ba
These secrets ensure the second cluster can authenticate with the shared LDAP, reuse the same LTPA keys, and connect securely via OIDC.
- Import the Zen certificate
The Zen components use their own TLS root certificate, which needs to be imported under a different name to avoid collisions with auto-generated secrets.
oc create secret generic iaf-system-automationui-aui-zen-ca-external1 --from-file=tls.crt=zenRootCA.cert -n cp4ba
This secret is added to the trust store of the second cluster to support secure communication and cross-site access via Zen.
- Run the cluster admin script and install the CP4BA operators
As in the first cluster, begin by running the cp4a-clusteradmin-setup.sh script to assign necessary permissions and install CP4BA operators in the target namespace.
These steps are identical to Step 1 from the Preparing the First OCP Cluster for Geo-Distributed Deployment section.
- Generate the property files using the prerequisite script
Run the cp4a-prerequisites.sh script in -m property mode to generate the required property files.
./cp4a-prerequisites.sh -m property
This step is the same as Step 2 of the first cluster deployment; it generates the base property files used to configure the LDAP and database connections.
- Reuse the same LDAP and core databases
In a geo-distributed deployment, all participating clusters must connect to the same LDAP and use the same GCD (Global Configuration Database) and Object Store (OS1).
Suppose you used the database names FNOSGCD for GCDDB and FNOSOS1 for OS1DB in the first deployment. In your cp4ba_db_name_user.property file, the entries should then look like:
GCDDB=FNOSGCD
OS1DB=FNOSOS1
Keeping GCDDB and OS1DB names consistent ensures both deployments register under the same FileNet domain and share object store definitions.
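A quick scripted sanity check can catch a mismatch before deployment. This is a sketch; the file name and the database names FNOSGCD/FNOSOS1 are the example values from above:

```shell
# check_shared_dbs: verify the shared GCD and OS1 database names in the
# second cluster's property file match the first deployment's values.
check_shared_dbs() {
  grep -q '^GCDDB=FNOSGCD$' "$1" && grep -q '^OS1DB=FNOSOS1$' "$1"
}

# Example: check_shared_dbs cp4ba_db_name_user.property && echo "DB names match"
```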
- Provide a unique ICNDB name
Since each site hosts its own instance of the IBM Content Navigator (ICN), provide a different name for the ICNDB in the same property file.
ICNDB=FNOSICN2 # Example: different from FNOSICN1 used in cluster one
ICNDB should be unique per site to ensure independent workspace and configuration per Navigator instance.
- Create only the new ICNDB database
Use the ICNDB SQL script to create only the ICNDB database in the shared database server. Do not recreate GCD or OS1, as they already exist from the first deployment.
This is a key geo-distribution rule: GCDDB and OS1DB must not be duplicated or recreated. ICNDB is site-specific.
- Create the required secrets
Run the create_secret.sh script to create all required Kubernetes secrets based on the generated and updated property files.
./create_secret.sh
Secrets for LDAP, DB connections, and certificates are now scoped to the second deployment and use the same credentials as the initial site.
- Generate the CR file
Finally, generate the CR by running the cp4a-deployment.sh script, just as done in the first cluster.
./cp4a-deployment.sh
This step is the same as Step 3 of the first deployment. It creates a base CR YAML file, which is then edited to support the geo-distributed configuration.
- Editing and applying the CR in the second cluster
With the secrets imported and the base Custom Resource (CR) generated, the next step is to update the CR file to align with geo-distributed deployment requirements. These changes ensure that the second cluster joins the same FileNet P8 domain without reinitializing shared resources.
Follow the steps below to modify and apply the CR file:
a. Set a unique metadata name
Update the metadata.name field in the CR to be different from the first deployment. This name must match the values already set in the ecm-ltpa-export.yaml and cpe-oidc.yaml files.
metadata:
name: contentgeo # Example: use 'contentgeo' instead of 'content'
Each deployment must have a unique virtual server name in the FileNet domain.
b. Enable LDAP configuration
Ensure the following property is set under shared_configuration:
shared_configuration:
sc_skip_ldap_config: false
This ensures the deployment uses the shared LDAP directory, which is essential for LTPA token exchange across clusters.
c. Disable Initialization and Verification
Since initialization and verification were already done in the first cluster, set the following values to false:
shared_configuration:
sc_content_initialization: false
sc_content_verification: false
This prevents the second cluster from attempting to recreate object stores or reinitialize the domain, avoiding conflicts.
d. Add external certificates to the trust store
Include the imported Zen and Common Services certificates (created with custom names to avoid naming conflicts) in the trusted_certificate_list section:
trusted_certificate_list:
- iaf-system-automationui-aui-zen-ca-external1
These entries ensure that the second deployment trusts the Common UI (Zen) and Common Services components issued from the first cluster.
Note: If you use a custom CA for Content Platform Engine (for example, custom-root-ca-secret), the secret that contains the custom CA must be exported from the first deployment, imported into the second deployment, and added to its trust store.
e. Apply the Custom Resource file
Once the CR file is updated, apply it to start the deployment:
oc apply -f ibm_content_cr_final.yaml
This initiates the deployment of CP4BA capabilities in the second cluster and joins it to the existing FileNet domain.
- Monitor the Deployment
Use standard OpenShift monitoring commands to verify that all pods and services are deployed successfully in the second cluster:
oc get pods -n cp4ba
Once complete, the second site becomes part of the geo-distributed FileNet P8 domain, fully integrated with shared object stores and LDAP-backed security.
Post-deployment configuration for geo-distributed environments
Once the second cluster is successfully deployed and connected to the shared FileNet P8 domain, additional configurations are required to ensure proper inter-site communication, request forwarding, and repository access.
In this section, we will walk through the following post-deployment tasks:
- Configure request forwarding between sites
- Configure Content Platform Engine (CPE) server communication
- Set up Navigator repositories for the second site
Post-Deployment Preparation:
After successful deployment in the second cluster:
Access ACCE:
- Navigate to the second cluster’s OpenShift Console.
- Go to ConfigMaps → access-info, and find the URL for the Content Platform Engine Administration Console (ACCE).
- Log in to ACCE using administrator credentials.
Configure Request Forwarding Between Sites:
Request forwarding allows a site to redirect data operations to the geographically closest server, improving performance by minimizing WAN communication.
Create a New Site in ACCE:
- In ACCE, go to: P8 Domain → Global Configuration → Administration → Sites
- Right-click Sites and choose New Site. Name it appropriately (e.g., Site-2) and finish the creation wizard.
- Now move the second deployment's virtual server to this new site:
  - Right-click the Initial Site
  - Choose Move Site Components
  - Select the virtual server with the metadata.name from the second deployment (e.g., contentgeo)
  - Choose Site-2 as the destination
Enable Request Forwarding for Both Sites:
In ACCE, go to: P8 Domain → Global Configuration → Administration → Sites
- Click on Initial Site, go to the General tab, and set:
Can forward requests → True
Can accept forwarded requests → True
- Repeat the same for Site-2.
This allows each site to send and receive forwarded requests based on proximity to the object store database.
Configure Request Forwarding Endpoints:
- Still in the Sites panel, select each Virtual Server under both sites.
- Under the General tab, set the Request Forwarding Endpoint URI. Use the following format:
https://<CPE_ROUTE>/wsi/FNCEWS40MTOM
To find the CPE route:
  - In OpenShift, go to Networking → Routes
  - Look for a route like {metadata}-cpe-route (e.g., contentgeo-cpe-route)
  - Append /wsi/FNCEWS40MTOM to the host URL
Repeat for both deployments using their respective CPE routes.
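Constructing the endpoint URI from the route host can also be scripted. The helper below is a sketch, and the route name contentgeo-cpe-route is the example from above:

```shell
# endpoint_for: build the request forwarding endpoint URI from a route host.
endpoint_for() {
  printf 'https://%s/wsi/FNCEWS40MTOM\n' "$1"
}

# Against a live cluster you could feed it the route host directly:
#   endpoint_for "$(oc get route contentgeo-cpe-route -n cp4ba -o jsonpath='{.spec.host}')"
```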
Configure Content Platform Engine Server Communication
The Server Communication URL is required for CPE instances to securely replicate and interact across sites.
1. In ACCE, go to: P8 Domain → Global Configuration → Administration → Sites
2. For each Virtual Server under both sites, set the Server Communication URL. IBM recommends using HTTPS in containerized deployments to secure cross-site communication. This URL is identical to the request forwarding endpoint.
3. (Optional) Enable certificate validation by checking: Server communication certificate validation → Enabled
4. Click Save to persist all changes.
Configure Navigator Repository for the Second Deployment
After cross-site communication is set up, configure IBM Navigator to connect to the shared object store from the second deployment.
Access Navigator:
1. From the second cluster, open IBM Content Navigator (the URL is available in the access-info ConfigMap).
Add Repository:
2. Go to Connections → Repositories → New Repository → FileNet Content Manager.
3. Determine the CPE service endpoint:
   - In OpenShift, go to Networking → Services
   - Find {metadata}-cpe-stateless-svc (e.g., contentgeo-cpe-stateless-svc)
   - Use the following format for the server URL (remove cluster.local from the hostname):
   https://<hostname>:9443/wsi/FNCEWS40MTOM
4. Enter:
   - Server URL: as determined above
   - Object Store Symbolic Name: usually OS01
   - Object Store Display Name: as configured during the first deployment
5. Click Connect → the connection should succeed.
6. Navigate to Desktops → New Desktop and create a new desktop using this repository. Confirm it loads successfully.
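Deriving the server URL from the service DNS name can likewise be scripted. This helper is a sketch; the service name shown is the example from above, and it simply drops a trailing .cluster.local and appends the CPE web service path:

```shell
# nav_url: build the Navigator server URL from a CPE stateless service
# hostname, stripping the cluster.local suffix as required.
nav_url() {
  printf 'https://%s:9443/wsi/FNCEWS40MTOM\n' "${1%.cluster.local}"
}

# Example:
#   nav_url contentgeo-cpe-stateless-svc.cp4ba.svc.cluster.local
```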
Configure Advanced Storage Areas (ASAs) with Content Replication
In a geo-distributed Content Platform Engine (CPE) setup, Advanced Storage Areas (ASAs) allow you to replicate content across multiple sites without needing to mount file systems across a WAN. This ensures content availability, disaster recovery, and optimized access for users in different regions.
Access ACCE in the First Cluster
- Log in to the cluster with the initial CP4BA deployment.
- Navigate to ConfigMaps → access-info, and open the ACCE (Content Platform Engine Administration Console).
- In ACCE, go to: Object Store → Object Store Name (e.g., OS01) → Administrative → Advanced Storage
- Here, you will see the existing Advanced Storage Areas (ASA) defined during the initial deployment.
- Right-click Advanced Storage Areas → New Advanced Storage Area
- Provide the required details for the new site (beyond the default values) and click Next to move on.
- Click Finish.
Content Replication Configuration
- Select the ASA corresponding to Site-2.
- Go to the Devices tab. Here, you will see the storage device associated with this ASA.
- Configure Replica Synchronization: for each device, set the replica synchronization parameters required for your topology.
At this point, both deployments:
- Share the same FileNet domain
- Communicate via request forwarding
- Trust and replicate content across sites
- Provide Navigator access to shared object stores
You can now repeat these steps for additional clusters to build a scalable and resilient geo-distributed CP4BA environment.
Reference:
Here’s the IBM Knowledge Center doc for reference.
Credits:
We would like to extend our sincere thanks to the reviewers Adam Davis, Todd Deen, Scott Nguyen, Kevin Trinh, Dheeraj Kumar Krishan, and Justin Wang for their valuable feedback and support throughout the development of this blog.