FileNet Standalone Recipe: GKE
Introduction
Welcome to the latest installment of FileNet Standalone Deployment Recipes! Today, we're going to walk you through deploying and setting up IBM FileNet Standalone on Google Cloud Platform (GCP).
Whether you're a seasoned developer or a beginner, this guide will provide you with a step-by-step process to get started with GCP and IBM FileNet Standalone.
Disclaimer:
AlloyDB is used in this recipe for evaluation purposes only and is not officially supported at this time.
The steps and configurations provided in this recipe are for demonstration purposes only and may not be suitable for your specific use case.
Please review and validate that the configuration is supported by IBM before using it.
Let's dive in!
This guide will cover the following topics and features:
- Creating a GKE cluster
- Creating a GCP AlloyDB instance
- Setting up Filestore multishare for GKE
- Setting up GKE Ingress with a Google Managed Certificate
- Deploying FileNet on GKE using scripts
Prerequisites:
- A Google Cloud Platform account
- Basic knowledge of GCP and IBM FileNet
- An IBM Entitlement Registry key
- Familiarity with command-line tools and terminal commands
- A bastion host with access to GKE
For this recipe, we will be using a 3 node GKE cluster with 4 vCPUs and 15 GB of memory per node (n1-standard-4). This configuration is suitable for a small to medium-sized deployment. You can adjust the number of nodes and resources based on your specific requirements.
We will also be using a GCP AlloyDB instance for our database, which is a fully managed PostgreSQL-compatible database service. This will allow us to easily scale our database as needed.
For authentication, we will be using an Azure-hosted Microsoft Active Directory (AD) server. Its setup is not covered in this recipe, but you can find more information on how to set up Azure AD in the official Microsoft documentation.
At the time of writing, the latest version of FileNet Standalone is 5.6.0IF3. This version is required as it includes support for fsGroup, which is needed for GCP FileStore. For more information on the FileNet Standalone container 5.6.0IF3 release, see https://github.com/ibm-ecm/container-samples/releases/tag/v5.6.0.3
Our FileNet Standalone deployment will include the following components:
- Content Engine (CE)
- Content Search Services (CSS)
- GraphQL API
- Content Management Interoperability Services (CMIS)
- Content Navigator (ICN)
- Task Manager (TM)
Component Versions
Component      | Image                | Build Version
---------------|----------------------|--------------
CPE            | ga-560-p8cpe-if003   | 5.6.0-3-219
CPE-SSO        | ga-560-p8cpe-if003   | 5.6.0-3-219
CSS            | ga-560-p8css-if003   | 5.6.0-2-11
GraphQL        | ga-560-p8cgql-if003  | 5.6.0-42
External Share | ga-310-es-if004      | 5.6.0-0-115
CMIS           | ga-307-cmis-la702    | 307.009.0143
TaskManager    | ga-310-tm-if004      | 310.004.449
Navigator      | ga-310-icn-if004     | 310.004.329
Navigator-SSO  | ga-310-icn-if004     | 310.004.329
Keytool-Init   | 24.0.0-IF005         | 24.0.0-IF005
Operator       | 5.6.0-IF003          | 5.6.0-IF003
Environment Setup
Before we begin, make sure you have the following tools installed on your Bastion host:
- Google Cloud SDK (gcloud)
- kubectl
- Python 3.9+
Install the Google Cloud SDK: follow the instructions in the official documentation: https://cloud.google.com/sdk/docs/install
Install kubectl: follow the instructions in the official documentation: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl
Install Python 3.9+: follow the instructions in the official documentation: https://www.python.org/downloads/
We will be defining the following session variables:
export PROJECT_ID=<your-gcp-project-id>
export REGION=us-central1
export ZONE=us-central1-a
export GKE_CLUSTER_NAME=fncm-cluster
Configure gcloud CLI:
gcloud config set project $PROJECT_ID
gcloud config set compute/region $REGION
gcloud config set compute/zone $ZONE
Enable the required APIs:
gcloud services enable compute.googleapis.com \
alloydb.googleapis.com \
dns.googleapis.com \
servicenetworking.googleapis.com \
iam.googleapis.com \
container.googleapis.com \
cloudresourcemanager.googleapis.com
Create a GKE Cluster
We will be deploying a Standard GKE cluster in a regional location. This gives us a highly available cluster spanning multiple zones in the us-central1 region. Note that for a regional cluster the node count applies per zone: the gcloud default of 3 nodes per zone would create 9 nodes in total, so add --num-nodes 1 to the command below if you want the 3-node footprint described earlier.
To reduce cost, you can also create a zonal cluster, but this is not recommended for production workloads. A zonal cluster will only have one zone, which means that if that zone goes down, your cluster will be unavailable.
At the time of writing, the latest GKE version is 1.32.3-gke. Refer to the FileNet Standalone SPCR for the latest supported CNCF K8s version.
Cluster configuration:
- Machine type: n1-standard-4 (4 vCPUs, 15 GB memory)
- Networking:
  - Control plane access:
    - Access using DNS: Yes
    - Access using IPv4: Yes
  - Cluster networking:
    - Enable HTTP load balancing
    - Enable Filestore CSI driver
To create a GKE cluster, follow these steps:
1. Run the following gcloud command:
gcloud beta container --project "$PROJECT_ID" clusters create "$GKE_CLUSTER_NAME" \
--region "$REGION" --tier "standard" --no-enable-basic-auth \
--machine-type "n1-standard-4" \
--logging=SYSTEM,WORKLOAD --monitoring=SYSTEM,STORAGE,POD,DEPLOYMENT,STATEFULSET,DAEMONSET,HPA,JOBSET,CADVISOR,KUBELET,DCGM \
--enable-ip-alias --default-max-pods-per-node "110" --enable-dns-access --enable-ip-access --security-posture=standard \
--workload-vulnerability-scanning=disabled --enable-managed-prometheus \
--addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver,GcpFilestoreCsiDriver
2. Wait for the cluster to be created. This may take a few minutes.
3. Once the cluster is created, run the following command to connect to the cluster using gcloud and kubectl:
gcloud container clusters get-credentials $GKE_CLUSTER_NAME --region $REGION --project $PROJECT_ID
4. Verify that you are connected to the cluster by running the following command:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-fncm-cluster-default-pool-1-2c3d4e5f-abc Ready <none> 10m v1.32.3-gke.2000
gke-fncm-cluster-default-pool-1-2c3d4e5f-def Ready <none> 10m v1.32.3-gke.2000
gke-fncm-cluster-default-pool-1-2c3d4e5f-ghi Ready <none> 10m v1.32.3-gke.2000
You should see the nodes in the cluster with a status of Ready. This means that the cluster is up and running and you are connected to it.
Create a GCP AlloyDB Instance
We will be using a GCP AlloyDB instance for our database. This is a fully managed PostgreSQL-compatible database service that allows us to easily scale our database as needed. At the time of writing, the latest AlloyDB version is 16. Refer to the FileNet Standalone SPCR for the latest supported PostgreSQL versions.
For communication from the GKE cluster, we will be setting up a Private Service Connect (PSC) endpoint. This will allow us to connect to the AlloyDB instance from the GKE cluster without exposing it to the public internet.
For a full FileNet Standalone deployment, we will need 4 databases:
- Content Engine Global Configuration Database (GCD)
- Content Engine Object Store 1 (OS1)
- Content Engine Object Store 2 (OS2)
- Navigator Configuration Database (ICNDB)
For a comprehensive quick-start guide on AlloyDB, please see https://codelabs.developers.google.com/codelabs/psc-alloydb
To create a GCP AlloyDB instance, follow these steps:
1. Run the following gcloud command to create the AlloyDB cluster:
gcloud alloydb clusters create fncms-alloydb \
--password=changeme \
--region=$REGION \
--project=$PROJECT_ID \
--enable-private-service-connect
2. Create an AlloyDB primary instance:
gcloud alloydb instances create fncms-alloydb-primary \
--instance-type=PRIMARY \
--cpu-count=2 \
--availability-type=ZONAL \
--region=$REGION \
--cluster=fncms-alloydb \
--project=$PROJECT_ID \
--allowed-psc-projects=$PROJECT_ID \
--database-flags=max_prepared_transactions=1000
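Before setting up PSC, you can optionally confirm that the primary instance is ready. A minimal check, assuming the resource names used above:
gcloud alloydb instances list \
    --cluster=fncms-alloydb \
    --region=$REGION \
    --project=$PROJECT_ID \
    --format="table(name,state)"
The instance should report a state of READY before you continue.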
To set up Private Service Connect (PSC), follow these steps:
1. Find the VPC subnet CIDR range:
gcloud compute networks subnets describe default \
--region=$REGION --project=$PROJECT_ID \
--format="value(ipCidrRange)"
2. Based on the output range, create an internal IP address. The address below is an example within the default subnet's range:
gcloud compute addresses create alloydb-psc \
--project=$PROJECT_ID \
--region=$REGION \
--subnet=default \
--addresses=10.164.0.10
3. Confirm that the internal IP was created. Look for the RESERVED status:
gcloud compute addresses list --project=$PROJECT_ID \
--filter="name=alloydb-psc"
4. Get the service attachment URI for the AlloyDB instance:
gcloud alloydb instances describe fncms-alloydb-primary \
--cluster=fncms-alloydb \
--region="$REGION" \
--format="value(pscInstanceConfig.serviceAttachmentLink)" | \
sed 's|.*/projects/|projects/|'
5. Create the PSC endpoint, using the service attachment URI from the previous step as the target (the value below is an example from our environment):
gcloud compute forwarding-rules create alloydb-psc-ep \
--address=alloydb-psc \
--project=$PROJECT_ID \
--region=$REGION \
--network=default \
--target-service-attachment=projects/se8c7a64d55e2e1f9p-tp/regions/us-central1/serviceAttachments/alloydb-69f67b56-9fa-alloydb-instance-sa \
--allow-psc-global-access
6. Verify that the endpoint can connect to the service attachment. Look for the ACCEPTED status:
gcloud compute forwarding-rules describe alloydb-psc-ep \
--project=$PROJECT_ID \
--region=$REGION \
--format="value(pscConnectionStatus)"
7. Configure a private DNS managed zone:
gcloud dns managed-zones create alloydb-psc \
--project=$PROJECT_ID \
--dns-name=alloydb.googleapis.com. \
--visibility=private \
--networks=default
8. Obtain the suggested DNS name for the PSC endpoint:
gcloud alloydb instances describe fncms-alloydb-primary \
--cluster=fncms-alloydb --region=$REGION --project=$PROJECT_ID \
--format="value(pscInstanceConfig.pscDnsName)"
9. Create a DNS record set for the PSC endpoint using the DNS name and reserved IP:
gcloud dns record-sets create 69f67b56-9fa2-4480-89f6-3f1a38ee51cb.6a495301-4802-4ec1-bb9b-69b57d2616ec.us-central1.alloydb-psc.goog. \
--project=$PROJECT_ID \
--type=A \
--rrdatas=10.164.0.10 \
--zone=alloydb-psc
10. Make note of the DNS name. This will be used in the FileNet Standalone deployment scripts.
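As an optional sanity check, you can verify database connectivity from inside the cluster before deploying FileNet. The sketch below runs a temporary postgres client pod; replace <psc-dns-name> with the DNS name recorded above, and note that the postgres user and changeme password are the ones set when the AlloyDB cluster was created:
kubectl run psql-client --rm -it --restart=Never \
    --image=postgres:16 \
    --env="PGPASSWORD=changeme" -- \
    psql "host=<psc-dns-name> port=5432 user=postgres sslmode=require" -c "SELECT version();"
If the PSC endpoint, DNS record, and forwarding rule are wired up correctly, this prints the PostgreSQL version string.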
Setting up Filestore Multishare for GKE
We will be using GCP FileStore MultiShare for our FileNet Standalone deployment. This is a fully managed file storage service that allows us to easily scale our storage as needed. GKE supplies a CSI driver for GCP FileStore, which allows us to use FileStore as a persistent volume in our GKE cluster. The FileStore MultiShare storage class is a new feature that allows us to create multiple PVCs in a single FileStore instance, which makes it a cost-effective way to use FileStore for our FileNet Standalone deployment.
The minimum size for each PVC is 10GB, as opposed to the 1TB minimum for the other GKE FileStore storage classes.
To set up GCP FileStore MultiShare, follow these steps:
1. Create the FileStore MultiShare storage class definition fncms-filestore-multishare-rwx.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fncms-filestore-multishare-rwx
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: enterprise
  multishare: "true"
  max-volume-size: "128Gi"
allowVolumeExpansion: true
volumeBindingMode: Immediate
reclaimPolicy: Retain
2. Apply this storage class definition to the GKE cluster:
kubectl apply -f fncms-filestore-multishare-rwx.yaml
The storage class above has some key points:
- multishare: "true": Enables the multishare feature for the storage class.
- max-volume-size: "128Gi": Sets the maximum size for each PVC to 128 GiB, which allows for 80 PVCs per Filestore instance.
- allowVolumeExpansion: true: Allows us to expand the size of a PVC if needed.
- reclaimPolicy: Retain: Sets the reclaim policy to Retain, which means that the PV will not be deleted when the PVC is deleted. This is important for our FileNet Standalone deployment, as we want to keep the data even if the PVC is deleted.
Make note of the storage class name fncms-filestore-multishare-rwx, as this will be used in the FileNet Standalone deployment scripts.
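If you want to confirm the storage class works before the FileNet deployment, a throwaway PVC is a quick test. This sketch uses a hypothetical name multishare-test-pvc; the first volume can take several minutes to bind while the underlying Filestore instance is provisioned:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: multishare-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fncms-filestore-multishare-rwx
  resources:
    requests:
      storage: 10Gi
Apply it with kubectl apply -f multishare-test-pvc.yaml, wait for kubectl get pvc multishare-test-pvc to show Bound, then delete it. Because the reclaim policy is Retain, the released PV (and its share) will linger until you delete it manually.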
Create a GCP Ingress Hostname and Public IP
Before we can continue creating the ingress hostname, we need to register a domain name. This can be done using any domain registrar, such as Google Domains. For this recipe, we will be using the domain name fncm-standalone.dev. You can use any domain name you like, but make sure to update the DNS records accordingly. For more information on how to register a domain name, refer to the official documentation: https://cloud.google.com/domains/docs/register-domain
We will be using GCP Ingress to expose our FileNet Standalone deployment to the internet. This will allow us to access the FileNet Standalone web applications from outside the GKE cluster. This deployment will leverage container-native load balancing (NEGs), which is best practice for GKE Ingress. Once our deployment is complete, there will be additional steps to configure GCP Ingress.
To create a GCP Ingress hostname, follow these steps:
1. Create a static IP address for the Ingress hostname:
gcloud compute addresses create fncm-ingress-ip \
--global \
--project=$PROJECT_ID
2. If you are using a custom domain name, create a DNS record set for the Ingress hostname using the static IP address:
gcloud dns record-sets create fncm-gke.fncm-standalone.dev. \
--project=$PROJECT_ID \
--type=A \
--rrdatas=$(gcloud compute addresses describe fncm-ingress-ip \
--global \
--format="value(address)") \
--zone=fncm-standalone-dev
Make note of the Ingress hostname fncm-gke.fncm-standalone.dev and the name of the IP address fncm-ingress-ip, as these will be used in the FileNet Standalone deployment scripts.
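Before moving on, it is worth confirming that the record resolves to the reserved IP; DNS propagation can take a few minutes. A quick check from the bastion host:
dig +short fncm-gke.fncm-standalone.dev
gcloud dns record-sets list \
    --zone=fncm-standalone-dev \
    --project=$PROJECT_ID
The dig output should match the address shown by gcloud compute addresses describe fncm-ingress-ip --global.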
Our environment is now ready for the FileNet Standalone deployment. We will be using a set of scripts to deploy FileNet Standalone on GKE. These scripts will automate the deployment process and make it easier to manage the deployment.
Setting up the Bastion Host
The following scripts will be used to deploy FileNet Standalone on GKE:
- deployoperator.py: This script will deploy the FileNet Standalone operator on GKE.
- prerequisites.py: This script will generate our instruction set for the FileNet Standalone Operator.
You can view all deployment files and scripts in the container-samples GitHub repository: https://github.com/ibm-ecm/container-samples
1. Clone the GitHub repository:
git clone https://github.com/ibm-ecm/container-samples.git
cd container-samples
2. Set up a Python virtual environment:
python3 -m venv fncm-env
source fncm-env/bin/activate
pip install -r scripts/requirements.txt
Deploying the FileNet Standalone Operator
The FileNet Standalone operator is a Kubernetes operator that automates the deployment and management of FileNet Standalone on an OCP or CNCF K8s cluster. The operator is responsible for creating and managing the Kubernetes resources needed for FileNet Standalone, such as deployments, services, and persistent volumes. The operator is also responsible for managing the lifecycle of the FileNet Standalone components, such as starting and stopping the components, and scaling the components up or down as needed.
The deployoperator.py script has an interactive and a silent mode. For the readability of this recipe, we will be using the silent mode.
1. Fill out the silent mode variables in the configuration file silent_config/silent_install_deployoperator.toml:
LICENSE_ACCEPT = true
PLATFORM = 3
NAMESPACE = "fncms-gke"
ENTITLEMENT_KEY = "<IBMEntitlementKey>"
For more information on how to obtain the IBM Entitlement Key, refer to the official documentation: https://www.ibm.com/docs/en/filenet-p8-platform/5.6.0?topic=cluster-getting-access-images-from-public-entitled-registry
2. To use GKE Filestore multishare, we need to set fsGroup in the operator deployment. fsGroup is a Kubernetes security context setting that sets the group ID for the files in the PVC, and it is required for GKE FileStore MultiShare to work properly. Add the following section in the descriptors/operator.yaml file:
spec:
  securityContext:
    fsGroup: 0
3. Run the deployoperator.py script to deploy the FileNet Standalone operator on GKE:
cd scripts
python3 deployoperator.py --silent
4. Verify that the operator is running by checking the status of the operator deployment:
kubectl get pods -n fncms-gke
NAME READY STATUS RESTARTS AGE
ibm-fncm-operator-7f444c769c-rrqxx 1/1 Running 0 10m
5. Create the FileNet Operator PVC. This PVC is usually created during deployment; however, GKE FileStore MultiShare requires a minimum size of 10GB for each PVC. Create the PVC definition operator-shared-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: operator-shared-pvc
  labels:
    app.kubernetes.io/instance: ibm-fncm
    app.kubernetes.io/managed-by: ibm-fncm
    app.kubernetes.io/name: ibm-fncm
    release: 5.6.0
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fncms-filestore-multishare-rwx
  resources:
    requests:
      storage: 10Gi
6. Apply this PVC definition to the GKE cluster:
kubectl apply -f operator-shared-pvc.yaml -n fncms-gke
7. Verify that the PVC is created and bound to a PV:
kubectl get pvc -n fncms-gke
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
operator-shared-pvc Bound pvc-12345678-1234-1234-1234-123456789012 10Gi RWX fncms-filestore-multishare-rwx 10m
Creating the Google Ingress Configuration and Managed SSL Certificate
We will be using a Google Managed SSL certificate for our Ingress hostname. This lets us use HTTPS for our FileNet Standalone deployment without managing SSL certificates ourselves: Google automatically renews the certificate when it is about to expire. We will also configure the Ingress to redirect HTTP traffic to HTTPS, ensuring that all traffic to our FileNet Standalone deployment is secure.
1. Create a Google Managed SSL certificate definition fncm-managed-cert.yaml:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: fncm-managed-cert
  namespace: fncms-gke
spec:
  domains:
    - fncm-gke.fncm-standalone.dev
2. Apply this managed certificate definition to the GKE cluster:
kubectl apply -f fncm-managed-cert.yaml -n fncms-gke
3. Create a Google Ingress FrontendConfig definition fncm-ingress-fe-config.yaml:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: fncm-ingress-fe-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: PERMANENT_REDIRECT
4. Apply this frontend definition to the GKE cluster:
kubectl apply -f fncm-ingress-fe-config.yaml -n fncms-gke
Make note of the Ingress FrontendConfig name fncm-ingress-fe-config and the Managed SSL certificate name fncm-managed-cert, as these will be used in the FileNet Standalone deployment scripts.
Creating the FileNet Standalone Deployment Custom Resource and Database SQL
The prerequisites.py script will generate the SQL scripts needed to create the FileNet Standalone databases and the Kubernetes custom resource (CR) for the FileNet Standalone deployment. The SQL scripts will be used to create the FileNet Standalone databases in the AlloyDB instance. The CR will be used to deploy FileNet Standalone on GKE. The prerequisites.py script has an interactive and a silent mode. For the readability of this recipe, we will be using the silent mode.
The prerequisites.py script has 3 distinct modes:
- gather: Gathers the information needed to create the FileNet Standalone deployment propertyFiles. These will need to be filled out.
- generate: Generates the SQL scripts used to create the databases in the AlloyDB instance and the Kubernetes custom resource (CR) for the FileNet Standalone deployment.
- validate: Validates the storage class, the database and LDAP connections, and the users and groups. This mode is optional, but it is a good sanity check before deployment.
1. Fill out the silent mode variables in the configuration file silent_config/silent_install_prerequisites.toml:
FNCM_VERSION = 4
LICENSE = "FNCM.PVUProd"
PLATFORM = 3
INGRESS = true
AUTHENTICATION = 1
RESTRICTED_INTERNET_ACCESS = false
FIPS_SUPPORT = false
CSS = true
CMIS = true
TM = true
CPE = true
GRAPHQL = true
BAN = true
ES = false
IER = false
ICCSAP = false
DATABASE_TYPE = 4
DATABASE_SSL_ENABLE = true
DATABASE_OBJECT_STORE_COUNT = 2
[LDAP]
LDAP_TYPE = 2
LDAP_SSL_ENABLE = true
2. Run the prerequisites.py script in gather mode to create the propertyFiles folder:
cd scripts
python3 prerequisites.py --silent gather
3. Fill out the property files in the propertyFiles folder. The following files need to be filled out:
- fncm_db_server.toml:
  - DATABASE_SERVERNAME: The PSC endpoint DNS name of the AlloyDB instance, created in the previous step (a sketch of this file follows this list).
  - DATABASE_PORT: The port number for the AlloyDB instance. This is usually 5432.
  - DATABASE_USERNAME and DATABASE_PASSWORD: The SQL scripts will have statements to create these users. You may use an existing user and comment out those SQL statements before applying.
- fncm_ldap_server.toml:
  - Fill out the LDAP properties. This is required for authentication. LDAP setup is not covered in this recipe, but you can find more information in the official IBM documentation.
- fncm_user_group.toml:
  - FNCM_LOGIN_USER and FNCM_LOGIN_PASSWORD: A user that exists in your LDAP.
  - ICN_LOGIN_USER and ICN_LOGIN_PASSWORD: A user that exists in your LDAP.
  - GCD_ADMIN_USER_NAME: A user that exists in your LDAP.
  - GCD_ADMIN_GROUPS_NAME: A group that exists in your LDAP. The admin user must be a member of this group.
  - CPE_OBJ_STORE_OS_ADMIN_USER_GROUPS: A comma-separated list of users and groups that will have admin access to the object store.
- fncm_ingress.toml:
  - INGRESS_HOSTNAME: The DNS name that was created in the previous step.
  - INGRESS_ANNOTATIONS = ['kubernetes.io/ingress.class: gce','networking.gke.io/managed-certificates: fncm-managed-cert', 'kubernetes.io/ingress.global-static-ip-name: fncm-ingress-ip', 'networking.gke.io/v1beta1.FrontendConfig: fncm-ingress-fe-config']
- fncm_deployment.toml:
  - SLOW_FILE_STORAGE_CLASSNAME, MEDIUM_FILE_STORAGE_CLASSNAME, FAST_FILE_STORAGE_CLASSNAME: The storage class name that was created in the previous step. There are 3 storage class settings; we will use the same storage class fncms-filestore-multishare-rwx for all 3.
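For orientation, here is a minimal sketch of the database values discussed above as they might appear in fncm_db_server.toml. The exact layout of the generated file may differ, and the username shown is hypothetical (the generated SQL scripts contain the matching CREATE statements):
# Sketch only -- edit the file generated by gather mode rather than replacing it
DATABASE_SERVERNAME = "<psc-dns-name>"  # PSC DNS name from the AlloyDB setup
DATABASE_PORT = "5432"
DATABASE_USERNAME = "gcduser"           # hypothetical user; created by the generated SQL
DATABASE_PASSWORD = "<password>"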
4. Provide the SSL certificates.
Important: There is a known issue with AlloyDB instance SSL certificates. AlloyDB manages its own SSL certificates, and the CA certificate is not available for download. Instead, you will need to use a dummy SSL certificate to make sure the SSL connection is established. All SSL certificates are stored in the propertyFiles/ssl-certs folder. Copy the SSL certificates into the appropriate folder for each database and for LDAP.
The command below creates a self-signed SSL certificate for the AlloyDB instance. This dummy certificate will be used to establish the SSL connection:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes -subj "/CN=Dummy Certificate"
cp cert.pem propertyFiles/ssl-certs/gcd/serverca
cp cert.pem propertyFiles/ssl-certs/os/serverca
cp cert.pem propertyFiles/ssl-certs/os2/serverca
cp cert.pem propertyFiles/ssl-certs/icn/serverca
cp <yourLDAPCert>.pem propertyFiles/ssl-certs/ldap
Important: All certificates must be in PEM format. If you have a different format, you will need to convert it to PEM format.
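For example, if a certificate was exported in binary DER format, a standard openssl invocation converts it to PEM (the filenames here are placeholders):
openssl x509 -inform der -in ldap-cert.der -out ldap-cert.pem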
5. Run the prerequisites.py script in generate mode to create the SQL scripts and CR for the FileNet Standalone deployment:
python3 prerequisites.py generate
Note: to use GKE Filestore multishare, the generated CR file needs to be adjusted for the 10Gi PVC size minimum; we will handle this with the --pvc-size flag when running validate mode in the next section. The CR will be created in the generatedFiles/ibm_fncm_cr_production.yaml file.
6. Run the SQL scripts to create the FileNet Standalone databases in the AlloyDB instance. The SQL scripts will be created in the generatedFiles/database folder and are named as follows:
- createGCD.sql: Creates the GCD database.
- createICN.sql: Creates the ICN database.
- createos.sql: Creates the OS1 database.
- createos2.sql: Creates the OS2 database.
Use AlloyDB Studio to run the SQL scripts, or any SQL client that supports PostgreSQL (a psql sketch follows). For more information on AlloyDB Studio, refer to the official documentation: https://cloud.google.com/alloydb/docs/manage-data-using-studio
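If you prefer the command line over AlloyDB Studio, the same scripts can be run with psql from any host that can reach the PSC endpoint. A sketch, assuming the postgres user and the DNS name noted earlier:
export PGPASSWORD=changeme
for f in createGCD.sql createICN.sql createos.sql createos2.sql; do
  psql "host=<psc-dns-name> port=5432 user=postgres sslmode=require" \
    -f generatedFiles/database/$f
done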
Deploying and Validating your FileNet Standalone Deployment
The prerequisites.py script can validate your FileNet Standalone Custom Resource and configuration. This step is optional, but it is a good sanity check and is recommended before deployment. If the validation passes, you will get an option to apply all the secrets and the CR to the GKE cluster.
1. Run the prerequisites.py script in validate mode to validate the FileNet Standalone deployment. A flag was added to adjust the PVC size, as GKE FileStore requires a 10Gi minimum size for each PVC:
python3 prerequisites.py validate --pvc-size 10Gi
2. If you do not want to apply through the script, run the following commands to apply the secrets and CR to the GKE cluster manually:
kubectl apply -f generatedFiles/secrets -n fncms-gke
kubectl apply -f generatedFiles/ssl -n fncms-gke
kubectl apply -f generatedFiles/ibm_fncm_cr_production.yaml -n fncms-gke
3. The FileNet Operator will start creating the FileNet Standalone components and K8s artifacts. This can take up to 30 minutes to complete, depending on PVC creation speed and image download time.
4. Verify that the FileNet Standalone components are running by checking the status of the pods in the fncms-gke namespace:
kubectl get pods -n fncms-gke
NAME READY STATUS RESTARTS AGE
ibm-fncm-operator-7f444c769c-rrqxx 1/1 Running 0 10m
fncm-cpe-0 1/1 Running 0 10m
fncm-css-0 1/1 Running 0 10m
fncm-cmis-0 1/1 Running 0 10m
fncm-graphql-0 1/1 Running 0 10m
fncm-tm-0 1/1 Running 0 10m
fncm-navigator-0 1/1 Running 0 10m
5. Verify that initialization and verification have completed. This information is stored in two configmaps:
- fncmdeploy-verification-config
- fncmdeploy-initialization-config
Check the status of the configmaps:
kubectl get configmaps -n fncms-gke
NAME DATA AGE
fncmdeploy-initialization-config 4 10m
fncmdeploy-verification-config 4 10m
kubectl describe configmaps fncmdeploy-initialization-config -n fncms-gke
kubectl describe configmaps fncmdeploy-verification-config -n fncms-gke
6. Verify the status of the FileNet Standalone components and the overall deployment by checking the status of the CR. The CR will contain the status of the deployment and the status of each component:
kubectl get fncmcluster fncmdeploy -n fncms-gke
NAME AGE
fncmdeploy 1h
kubectl describe fncmcluster fncmdeploy -n fncms-gke
At this point in the recipe, we have successfully deployed FileNet Standalone on GKE using AlloyDB and GKE FileStore Multishare. Our last steps are to configure the GCP Ingress and Managed SSL certificate.
GCP Ingress and Managed SSL Certificate Post Deployment
There are some additional steps to configure the GCP Ingress and Managed SSL certificate after the FileNet Standalone deployment is complete.
Our ingress should be created by the operator and configured with the static IP address and managed SSL certificate. The managed SSL certificate will only be issued once the ingress is publicly accessible.
The GCP Ingress acts as a load balancer between your service and clients. Clients can connect using HTTP or HTTPS; we will configure HTTP requests to be redirected to HTTPS. However, the load balancer proxy communicates with the service over HTTP by default, so we will annotate the services to force HTTPS connections from the ingress.
GCE ingress also uses a health check to determine whether a backend service is healthy. The default health check is a TCP health check on port 80, so we need to create custom health checks for our FileNet Standalone deployment services.
1. Confirm the ingress was created by the operator. It should be using our created DNS entry and static IP:
kubectl get ingress -n fncms-gke
NAME CLASS HOSTS ADDRESS PORTS AGE
fncmdeploy-ingress gce fncm-gke.fncm-standalone.dev 35.123.456.789 80, 443 10m
2. Check the status of the managed SSL certificate. This should be Active:
kubectl describe managedcertificate fncm-managed-cert -n fncms-gke
Name:         fncm-managed-cert
API Version: networking.gke.io/v1
Kind: ManagedCertificate
(...)
Spec:
Domains:
fncm-gke.fncm-standalone.dev
Status:
CertificateStatus: Active
3. Create a custom health check configuration for the FileNet Standalone deployment services in fncm-health-check.yaml. Each service will be annotated with its BackendConfig; the requestPath and port are the same as the readiness probes used by the component pods.
# CPE
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: cpe-backend-config
spec:
  healthCheck:
    checkIntervalSec: 30
    timeoutSec: 5
    type: HTTPS
    requestPath: /FileNet/Engine?statusCheck=live
    port: 9443
---
# CMIS
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: cmis-backend-config
spec:
  healthCheck:
    checkIntervalSec: 30
    timeoutSec: 5
    type: HTTPS
    requestPath: /openfncmis_wlp/ping.jsp
    port: 9443
---
# Navigator
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: nav-backend-config
spec:
  healthCheck:
    checkIntervalSec: 30
    timeoutSec: 5
    type: HTTPS
    requestPath: /navigator/jaxrs/api/health
    port: 9443
---
# TaskManager
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: tm-backend-config
spec:
  healthCheck:
    checkIntervalSec: 30
    timeoutSec: 5
    type: HTTPS
    requestPath: /taskManagerWeb/ping
    port: 9443
---
# GraphQL
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: graphql-backend-config
spec:
  healthCheck:
    checkIntervalSec: 30
    timeoutSec: 5
    type: HTTPS
    requestPath: /content-services-graphql/ping
    port: 9443
  customRequestHeaders:
    headers:
      - "Cookie:ECM-CS-XSRF-Token=1"
      - "ECM-CS-XSRF-Token:1"
4. Apply this fncm-health-check.yaml definition to the GKE cluster:
kubectl apply -f fncm-health-check.yaml -n fncms-gke
5. Annotate the services with the health checks:
kubectl annotate service fncmdeploy-cpe-svc beta.cloud.google.com/backend-config='{"default": "cpe-backend-config"}' -n fncms-gke
kubectl annotate service fncmdeploy-cpe-stateless-svc beta.cloud.google.com/backend-config='{"default": "cpe-backend-config"}' -n fncms-gke
kubectl annotate service fncmdeploy-cmis-svc beta.cloud.google.com/backend-config='{"default": "cmis-backend-config"}' -n fncms-gke
kubectl annotate service fncmdeploy-navigator-svc beta.cloud.google.com/backend-config='{"default": "nav-backend-config"}' -n fncms-gke
kubectl annotate service fncmdeploy-graphql-svc beta.cloud.google.com/backend-config='{"default": "graphql-backend-config"}' -n fncms-gke
kubectl annotate service fncmdeploy-tm-svc beta.cloud.google.com/backend-config='{"default": "tm-backend-config"}' -n fncms-gke
6. Annotate the services to label the https port, forcing HTTPS traffic from the load balancer (a verification sketch follows these commands):
kubectl annotate service fncmdeploy-cpe-svc cloud.google.com/app-protocols='{"https":"HTTPS"}' -n fncms-gke
kubectl annotate service fncmdeploy-cpe-stateless-svc cloud.google.com/app-protocols='{"https":"HTTPS"}' -n fncms-gke
kubectl annotate service fncmdeploy-cmis-svc cloud.google.com/app-protocols='{"https":"HTTPS"}' -n fncms-gke
kubectl annotate service fncmdeploy-navigator-svc cloud.google.com/app-protocols='{"https":"HTTPS"}' -n fncms-gke
kubectl annotate service fncmdeploy-graphql-svc cloud.google.com/app-protocols='{"https":"HTTPS"}' -n fncms-gke
kubectl annotate service fncmdeploy-tm-svc cloud.google.com/app-protocols='{"https":"HTTPS"}' -n fncms-gke
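Once the BackendConfigs and annotations are applied, the load balancer re-creates its health checks, which can take several minutes. One way to confirm is to read the backends annotation that the GKE ingress controller writes onto the ingress; each backend should eventually report HEALTHY:
kubectl get ingress fncmdeploy-ingress -n fncms-gke \
    -o jsonpath='{.metadata.annotations.ingress\.kubernetes\.io/backends}'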
7. We need to import the managed SSL certificate into the FileNet Standalone deployment. This is required for the deployment to trust the managed SSL certificate for HTTPS. Download the managed SSL certificate from the URL using openssl, and save it to a file:
echo | openssl s_client -showcerts -connect fncm-gke.fncm-standalone.dev:443 2>&1 </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > gke-ingress.crt
8. Create a secret for the managed SSL certificate:
kubectl create secret generic gke-ingress --from-file=tls.crt=gke-ingress.crt -n fncms-gke
9. Update the CR with the secret name. The secret name needs to be added to the trusted certificate list in the CR:
kubectl edit fncmcluster fncmdeploy -n fncms-gke
spec:
  shared_configuration:
    trusted_certificate_list: [gke-ingress]
10. Wait for the FileNet Standalone deployment to be updated. This can take up to 20 minutes. You will see the component pods restarting as the truststores are updated.
11. Obtain your access URLs for the FileNet Standalone deployment.
You can use any of the URLs to access your FileNet Standalone deployment. All component URLs are listed in a configmap: fncmdeploy-fncm-access-info
kubectl get configmap fncmdeploy-fncm-access-info -n fncms-gke -o yaml
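As a quick reachability check from the bastion host, you can request the Navigator context root over HTTPS; an HTTP status in the 200 or 302 range indicates the ingress, certificate, and backend are working (the exact redirect behavior depends on your authentication setup):
curl -s -o /dev/null -w "%{http_code}\n" https://fncm-gke.fncm-standalone.dev/navigator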
12. Use the following credentials to access the FileNet Standalone deployment. These are the same credentials that were used in your propertyFiles in the previous step:
- FNCM_LOGIN_USER and FNCM_LOGIN_PASSWORD
- ICN_LOGIN_USER and ICN_LOGIN_PASSWORD
You have now successfully deployed FileNet Standalone on GKE using AlloyDB and GKE FileStore MultiShare!
If you have any questions or issues, please feel free to reach out to the FileNet Standalone community on GitHub: https://github.com/ibm-ecm/container-samples
You can also find more information on how to deploy FileNet Standalone on GKE in the official documentation: https://www.ibm.com/docs/en/filenet-p8-platform/5.6.0?topic=using-containers
Happy Containerizing!
Credits:
- Author: Jason Kahn
- Reviewer: Kevin Trinh, Joseph Krenek