Cognos Analytics

Running IBM Cognos Analytics on Kubernetes: A Complete Guide

By Anand Kushwaha posted 24 days ago

  

Deploying enterprise BI on a local Kubernetes cluster—what we learned and exactly how to do it.


Why put Cognos on Minikube?

Many teams run IBM Cognos Analytics on traditional VMs or commercial Kubernetes. Before committing to a full cluster or cloud migration, it helps to validate the containerised stack locally: images, databases, storage, and integrations. Minikube is a good fit for that. It gives you a real Kubernetes API, real workloads, and a path that translates to larger clusters later.

We set out to get Cognos up on Minikube, connect it to a PostgreSQL container running in Docker, and run a real report against an external database. This blog post summarises what we did, what we learned along the way, and the full step-by-step so you can reproduce it.


The goal

  • Run IBM Cognos Analytics (certified containers from IBM) on Minikube.
  • Use PostgreSQL for Cognos’ own databases (content store, audit, notification).
  • Persist data and drivers in a way that survives Minikube restarts.
  • Connect Cognos to an external PostgreSQL (e.g. Pagila) and run a report.

No OpenShift, no cloud—just Minikube on a single host and a repeatable path for others to follow.


High-level architecture: Host → Minikube (Cognos pods) + external Postgres (Docker on the host) for the content/audit/nc databases and the report data source.
Prerequisites

Before you start, have the following in place:

  • Minikube (e.g. v1.35.0) with Docker driver
  • kubectl and Helm installed and configured for the cluster
  • IBM Container Registry access to icr.io/ecabeta (API key from IBM Cloud; request access from IBM)
  • IBM Cognos Helm chart (e.g. ibm-cacc-prod 1.1.1) — from IBM or your internal repo
  • Resources: 32 GB RAM and 16 CPUs recommended for the Minikube VM


Part 1: What we had to sort out

Understanding these upfront will save you time.

1. Vanilla Kubernetes, not OpenShift

The official material and scripts often assume OpenShift: oc, Routes, and OpenShift-specific security. For Minikube we had to use kubectl, a LoadBalancer service instead of Routes, and set openshiftContext: false in the Helm overrides. We also created a deployment script based on kubectl (e.g. deployecabeta-kubectl.sh).

2. Registry access

Pulling from icr.io/ecabeta requires an IBM Cloud API key with access to that registry. We created a regcred docker-registry secret in the Cognos namespace so all pods could pull the images. You will need an API key from IBM for this. If you don’t have access, you may need to contact IBM Sales for your POC.

3. Image tags and digests

The chart’s default or example values referenced tags that didn’t exist (e.g. latest). Check IBM Cloud Container Registry and pin digests in override-minikube.yaml to make sure you are using the right versions and digests of the container images.

4. Databases: PostgreSQL

We wanted open-source PostgreSQL as the external database. That meant three Postgres instances (Content Store, Audit, Notification), run outside Minikube via Docker Compose so data survives cluster resets. In the override we set contentDbClass, auditDbClass, and ncDbClass to "PostgreSQL" and pointed the hostnames at the Kubernetes service names for our Postgres instances (e.g. ca-cs, ca-audit-store).

5. Storage: hostPath vs SMB

Minikube’s default storage lives inside the Minikube container (under /tmp); if you delete the Minikube cluster, you lose all that data. We wanted to inject the PostgreSQL JDBC driver into the Dataset Service (DSS) and keep data persistent, so we introduced an SMB-backed StorageClass. A small SMB server in a container on the host, plus the SMB CSI driver in the cluster, let us mount host-backed PVCs. We could then wget the driver jar into the DSS drivers directory.

If you’re only experimenting, the default standard StorageClass is fine; just expect data loss when the cluster goes away.

6. The missing PostgreSQL driver in DSS

The Dataset Service didn’t have the PostgreSQL JDBC driver on the classpath; we saw CNC-SDS-0260 Unable to locate the class org.postgresql.Driver. The chart supports a drivers directory via PVC, so we put postgresql-42.7.3.jar there and recreated the DSS pod. Reporting and Content Manager already had the driver; only DSS needed this step.

7. Reaching Cognos

On Minikube, the LoadBalancer’s external IP often stays <pending>. We used kubectl port-forward to ca-ingress-lb (ports 9300 and 8090) and opened http://localhost:9300/bi/ in the browser. Since we deployed on a remote machine, we used SSH port forwarding.



Part 2: Step-by-step deployment

Follow these steps in order to reproduce the setup.

Please find the commands and configuration files referred to below in the following repository: Amvara Cognos Kubernetes (Minikube) Deployment.


Step 1: Start Minikube

Start the cluster with enough resources. When running as root with the Docker driver, --force may be required:

minikube start --driver=docker --memory=36864 --cpus=16 --cni=cilium --force
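For reference, the --memory flag takes a value in MiB, so the 36864 above corresponds to 36 GiB:

```shell
# --memory takes MiB; 36 GiB = 36 * 1024 MiB = 36864.
echo $((36 * 1024))
```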

Verify:

minikube status
kubectl get nodes

Enable ingress

minikube addons enable ingress

Start dashboard (handy for debugging):

minikube dashboard --url   # use SSH tunnel if accessing remotely


Screenshot: kubectl get nodes and minikube status showing the cluster is Ready.


Step 2: IBM Container Registry access

Get an IBM Cloud API key from https://cloud.ibm.com/iam/apikeys with access to the Cognos container registry (icr.io). Request access from IBM if you do not have it.

Screenshot of the IBM Cloud console showing where to create the API key (IAM → API keys), or a successful docker pull of a Cognos container image.

  1. Create the Kubernetes pull secret in the Cognos namespace:

export CP_REPO_USERNAME="<your-user>"
export CP_REPO_PASSWORD="<your-ibm-api-key>"
export CP_REPOSITORY=icr.io/ecabeta
export CLUSTER_NAMESPACE=cognos-ns

kubectl create namespace ${CLUSTER_NAMESPACE}
kubectl create secret docker-registry regcred \
  --docker-server=${CP_REPOSITORY} \
  --docker-username=${CP_REPO_USERNAME} \
  --docker-password=${CP_REPO_PASSWORD} \
  -n ${CLUSTER_NAMESPACE}

Verify with: docker login -u ${CP_REPO_USERNAME} -p ${CP_REPO_PASSWORD} icr.io and pull an image, e.g. icr.io/ecabeta/elastic-ca-cm with a tag that exists in the registry (see Step 6 on pinning tags).


Step 3: PostgreSQL databases for Cognos (Docker containers + Kubernetes external Services)

Cognos needs three databases: Content Store (cm), Audit, and Notification (nc). Run them outside the cluster (e.g. on the host) so data survives Minikube resets.

Diagram: three Postgres containers (content, audit, nc) on the host with mapped ports, and Kubernetes external Services pointing to them.

1. Use PostgreSQL with Docker Compose on the host

Use a docker-compose-postgres.yml that defines three Postgres 16 services (content, audit, nc), each with a dedicated database and port (e.g. 25432, 25433, 25434 mapped to 5432). 

 docker compose -f docker-compose-postgres.yml up -d

docker ps showing the three postgres containers.
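If you are building the compose file from scratch, it could look like this sketch (service names, database names, volume names, and the placeholder passwords are assumptions; the file in the repository is authoritative):

```yaml
# Sketch of docker-compose-postgres.yml: three Postgres 16 instances,
# one per Cognos database, each on its own host port.
services:
  ca-content:
    image: postgres:16
    environment:
      POSTGRES_DB: content
      POSTGRES_PASSWORD: <content-db-password>
    ports:
      - "25432:5432"
    volumes:
      - content-data:/var/lib/postgresql/data
  ca-audit:
    image: postgres:16
    environment:
      POSTGRES_DB: audit
      POSTGRES_PASSWORD: <audit-db-password>
    ports:
      - "25433:5432"
    volumes:
      - audit-data:/var/lib/postgresql/data
  ca-nc:
    image: postgres:16
    environment:
      POSTGRES_DB: nc
      POSTGRES_PASSWORD: <nc-db-password>
    ports:
      - "25434:5432"
    volumes:
      - nc-data:/var/lib/postgresql/data
volumes:
  content-data:
  audit-data:
  nc-data:
```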

2. Deploy external Services on the Kubernetes cluster

Deploy postgres-external-endpoints.yaml, which creates external Services so the Docker Postgres containers are reachable from inside the Kubernetes cluster.

Services + Endpoints so cluster can reach Postgres running on host (docker-compose).

kubectl apply -f postgres-external-endpoints.yaml


Note: find the host IP from inside Minikube, and update the Endpoints if your host.minikube.internal address differs:

minikube ssh -- getent hosts host.minikube.internal
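For reference, a minimal postgres-external-endpoints.yaml for the content store Service could look like the sketch below (repeat the Service/Endpoints pair for the audit and notification databases). The IP 192.168.49.1 is what host.minikube.internal often resolves to with the Docker driver on Linux; substitute the value returned by the command above:

```yaml
# Selector-less Service plus a manually managed Endpoints object
# pointing at Postgres on the host (port 25432 from docker-compose).
apiVersion: v1
kind: Service
metadata:
  name: ca-cs
  namespace: cognos-ns
spec:
  ports:
    - port: 5432        # port the Cognos pods connect to
---
apiVersion: v1
kind: Endpoints
metadata:
  name: ca-cs           # must match the Service name
  namespace: cognos-ns
subsets:
  - addresses:
      - ip: 192.168.49.1   # host IP as seen from inside Minikube
    ports:
      - port: 25432        # host port of the content Postgres container
```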


Step 4: Storage for Minikube — SMB (optional but recommended)

Minikube’s default standard (hostPath) storage stores PVC data inside the Minikube VM. If the VM is deleted, that data is lost. For a more durable setup and to inject JDBC drivers easily, use an SMB-backed StorageClass.

Diagram: Host (SMB server + volume) → Minikube (SMB CSI driver → PVCs for PowerCube and DSS drivers). Or screenshots of kubectl get storageclass showing smb and kubectl get pvc with PVCs bound to it.

  1. Run an SMB server (e.g. in a container on the host) using docker-compose-smb.yml that exposes an SMB share and keeps data in a host volume (e.g. cognos_poc_smb-data).

    docker compose -f docker-compose-smb.yml up -d

  2. Install the SMB CSI driver in the cluster and create a StorageClass named smb (see smb-storage.yaml). Ensure the driver can mount the SMB share from the cluster.

    helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
    helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system


  3. Use this StorageClass in the Cognos override for PowerCube and, if used, the DSS drivers PVC (see Step 6).

If you skip SMB, use the default standard StorageClass and accept that PVC data (and any drivers you put there) are tied to the Minikube VM.
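For orientation, a smb-storage.yaml StorageClass for the csi-driver-smb provisioner generally follows this shape (the share path, credentials secret, and mount options are assumptions; adjust them to your SMB container setup):

```yaml
# Create the SMB credentials secret first, e.g.:
#   kubectl create secret generic smbcreds -n kube-system \
#     --from-literal=username=<smb-user> --from-literal=password=<smb-password>
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: //<host-ip>/share          # SMB share exposed by the host container
  csi.storage.k8s.io/provisioner-secret-name: smbcreds
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: smbcreds
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=0777
```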


Step 5: Create Kubernetes secrets and deploy

Secrets must match the database users and passwords used in your Postgres setup and in the override file override-minikube.yaml.

There is a single script, deployecabeta-kubectl.sh, that creates the namespace, regcred, and these secrets, then runs Helm; you can use it instead of the commands below. Just ensure it uses kubectl and your override file, and run it from the repo root (or the path where the chart and override live).

# Replace usernames/passwords with those used in your Postgres setup
kubectl create secret generic ca-cs-credentials-secret \
  --from-literal=username="postgres" --from-literal=password="<content-db-password>" \
  --type=kubernetes.io/basic-auth -n ${CLUSTER_NAMESPACE}

kubectl create secret generic ca-audit-credentials-secret \
  --from-literal=username="postgres" --from-literal=password="<audit-db-password>" \
  --type=kubernetes.io/basic-auth -n ${CLUSTER_NAMESPACE}

kubectl create secret generic ca-nc-credentials-secret \
  --from-literal=username="postgres" --from-literal=password="<nc-db-password>" \
  --type=kubernetes.io/basic-auth -n ${CLUSTER_NAMESPACE}

# Placeholders; adjust if you use mail/LDAP
kubectl create secret generic ca-mailserver-credentials-secret \
  --from-literal=username="" --from-literal=password="" \
  --type=kubernetes.io/basic-auth -n ${CLUSTER_NAMESPACE}

kubectl create secret generic ca-ldapbind-credentials-secret \
  --from-literal=username="" --from-literal=password="" \
  --type=kubernetes.io/basic-auth -n ${CLUSTER_NAMESPACE}


Step 6: Helm override for Minikube and PostgreSQL

If you used deployecabeta-kubectl.sh, this has already been done.

Use an override file (e.g. override-minikube.yaml) that:

    • Ingress: createRoute: false, createLoadBalancer: true
    • Security: openshiftContext: false
    • Storage: powerCube (and optional DSS drivers PVC) use your StorageClass (smb or standard)
    • Content Manager / databases:
      • contentDbClass, auditDbClass, ncDbClass: "PostgreSQL"
      • contentDbHostname, auditDbHostname, ncDbHostname: Kubernetes service names (e.g. ca-cs, ca-audit-store; NC can use ca-cs if on the same server)
      • Ports: 5432; set *PostgreSqlSchema: "public" if required by the chart
    • Images: Pin tags/digests that exist in icr.io/ecabeta (e.g. elastic-ca-cm:jds3, correct digest for elastic-ca-reporting). Verify at IBM Cloud Container Registry.
    • POC convenience: aaaAllowAnonymous: true if desired
    • DSS drivers: If using a drivers PVC, set driversDirPVCenabled: true and the same StorageClass as for PowerCube
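As a rough illustration, the override settings above could be sketched as the fragment below. The key names come from the bullet list, but their exact nesting depends on the ibm-cacc-prod chart's values.yaml, so treat this as a checklist rather than a paste-ready file:

```yaml
# Illustrative override-minikube.yaml fragment; verify key placement
# against your copy of the chart's values.yaml before applying.
createRoute: false
createLoadBalancer: true
openshiftContext: false
aaaAllowAnonymous: true          # POC convenience only
driversDirPVCenabled: true       # DSS drivers PVC, same StorageClass as PowerCube
contentDbClass: "PostgreSQL"
contentDbHostname: ca-cs
auditDbClass: "PostgreSQL"
auditDbHostname: ca-audit-store
ncDbClass: "PostgreSQL"
ncDbHostname: ca-cs              # NC on the same server as the content store
```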

Apply the chart:

helm upgrade --install ca-instance ./ibm-cacc-prod -f ./override-minikube.yaml \
  --version 1.1.1 \
  --namespace ${CLUSTER_NAMESPACE} \
  --wait --timeout 15m

Screenshot of helm status ca-instance or kubectl get pods -n cognos-ns showing all Cognos pods (e.g. ca-cpd-cm-primary-0, ca-dss, ca-reporting, ca-ui, caproxy-frontdoor) in Running state.

root@amvara8 /root  10:44:51 # helm status ca-instance
NAME: ca-instance
LAST DEPLOYED: Wed Mar 11 12:05:08 2026
NAMESPACE: caccocp1
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
RESOURCES:
==> v1/ServiceAccount
NAME             SECRETS   AGE
cognos-account   0         14d

==> v1/ConfigMap
NAME                         DATA   AGE
audit-db-connection          8      14d
bi-service-disables          3      14d
ca-cm-only-config            83     14d
ca-container-versions        9      14d
ca-options-global-constant   9      14d
ca-options-global            10     14d
ca-options-no-disp           1      14d
ca-options-with-disp         1      14d
content-db-connection        8      14d
flipper-globals              5      14d
mailserver-config            5      14d
nc-db-connection             8      14d
odbc-ini                     1      14d
spiffe-global                6      14d

==> v1/PersistentVolumeClaim
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
ca-pvc-powercube-dir   Bound    pvc-8b87cf22-4019-45be-889f-69d028731265   10Gi       RWO            smb            <unset>                 14d
dss-pvc-drivers-dir    Bound    pvc-7705451c-21fb-4e7e-9f1c-37b71ee2fc94   10Gi       RWO            smb            <unset>                 14d

==> v1/Deployment
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
caproxy-frontdoor   1/1     1            1           14d
ca-dss              1/1     1            1           14d
ca-reporting        1/1     1            1           14d
ca-rest             1/1     1            1           14d
ca-smarts           1/1     1            1           14d
ca-ui               1/1     1            1           14d

==> v1/Role
NAME          CREATED AT
cognos-role   2026-03-11T11:05:09Z

==> v1/RoleBinding
NAME                   ROLE               AGE
caproxy-role-binding   Role/cognos-role   14d

==> v1/Service
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
caproxy-frontdoor-service   ClusterIP      10.110.81.210    <none>        9300/TCP                        14d
caproxy-metrics-service     ClusterIP      10.101.6.244     <none>        8090/TCP                        14d
ca-cpd-cm-primary-0         ClusterIP      10.102.13.113    <none>        9393/TCP,9392/TCP               14d
ca-dss                      ClusterIP      10.96.67.138     <none>        9393/TCP,9392/TCP               14d
flippers                    ClusterIP      None             <none>        5701/TCP,4300/TCP               14d
ca-ingress-lb               LoadBalancer   10.96.241.200    <pending>     9300:31141/TCP,8090:32580/TCP   14d
ca-reporting                ClusterIP      10.104.146.226   <none>        9393/TCP,9392/TCP               14d
ca-rest                     ClusterIP      10.97.41.56      <none>        9393/TCP,9392/TCP               14d
ca-smarts                   ClusterIP      10.110.9.50      <none>        9393/TCP,9392/TCP               14d
ca-ui                       ClusterIP      10.108.222.149   <none>        9393/TCP,9392/TCP               14d
zipkin                      ClusterIP      None             <none>        9411/TCP                        14d

==> v1/Pod(related)
NAME                                 READY   STATUS    RESTARTS          AGE
caproxy-frontdoor-545d54875c-j272c   1/1     Running   0                 14d
ca-dss-5dfcfcc756-4mbp7              2/2     Running   0                 14d
ca-reporting-795d8bc4d-ssrh9         2/2     Running   0                 14d
ca-rest-54f844d745-fw6sr             2/2     Running   0                 14d
ca-smarts-6f54c6585c-7fk5t           2/2     Running   0                 14d
ca-ui-6ffbbb7766-v2c5g               2/2     Running   0                 14d
ca-cpd-cm-primary-0                  1/2     Running   256 (6m32s ago)   14d

==> v1/StatefulSet
NAME                READY   AGE
ca-cpd-cm-primary   0/1     14d


TEST SUITE: None
NOTES:
Thank you for installing ibm-cacc-prod.

Your release is named ca-instance.

To learn more about the release, try:

  $ helm status ca-instance
  $ helm get all ca-instance
root@amvara8 /root  10:48:31 #


Step 7: PostgreSQL JDBC driver for DSS

The Dataset Service (DSS) loads JDBC drivers from a mounted directory (e.g. via PVC dss-pvc-drivers-dir). If that PVC is backed by SMB, the backing directory is on the host.

    1. Find the PVC’s backing directory

      Run kubectl get pvc, find the dss-pvc-drivers-dir PVC, and note its VOLUME id (i.e. pvc-<UUID>).

      Run docker volume inspect cognos_poc_smb-data and note the Mountpoint path.

      <Mountpoint>/pvc-<UUID> is where you should place the drivers.

    2. Download the PostgreSQL JDBC driver into that directory:

      wget https://jdbc.postgresql.org/download/postgresql-42.7.3.jar -P /path/to/pvc-drivers-dir/
      chmod 644 /path/to/pvc-drivers-dir/postgresql-42.7.3.jar

    3. Restart the DSS pod so it picks up the driver:

      kubectl delete pod -l app=ca-dss -n ${CLUSTER_NAMESPACE}
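Putting sub-steps 1 and 2 together, the drivers path can be composed like this (the mountpoint and PVC id below are examples from our deployment; substitute your own values):

```shell
# Compose the host path backing the DSS drivers PVC.
# MOUNTPOINT: from `docker volume inspect -f '{{ .Mountpoint }}' cognos_poc_smb-data`
# PVC_ID:     the VOLUME column of `kubectl get pvc` for dss-pvc-drivers-dir
MOUNTPOINT=/var/lib/docker/volumes/cognos_poc_smb-data/_data
PVC_ID=pvc-7705451c-21fb-4e7e-9f1c-37b71ee2fc94
DRIVERS_DIR="${MOUNTPOINT}/${PVC_ID}"
echo "${DRIVERS_DIR}"   # download postgresql-42.7.3.jar into this directory
```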


Step 8: Access Cognos

Minikube typically does not assign an external IP to the Cognos LoadBalancer service. Use port-forward to reach Cognos:

kubectl port-forward -n ${CLUSTER_NAMESPACE} svc/ca-ingress-lb 9300:9300

    • Cognos UI: http://localhost:9300/bi/ (or http://<host>:9300/bi/ if you use a tunnel)

For remote access, use SSH port forwarding for 9300 (and 8090 if needed) to your workstation.

Screenshot of the Cognos Analytics login or home page in the browser at http://localhost:9300/bi/.


Step 9: Connect Cognos to an external database for reports

To use an external PostgreSQL database (e.g. Pagila or another sample DB) as a data source for reports, deploy it using docker-compose-postgres-pagila.yml, then:

  1. Ensure the database is reachable from the cluster (host IP, port, firewall).
  2. In Cognos, create a data source connection (JDBC) with server/host, port (e.g. 15432), database name, user, and password.
  3. The Reporting service uses its own JDBC drivers; if the image includes postgresql-*.jar, the connection test should succeed. If not, add the driver via image extension or mounted volume per IBM’s documentation.
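For the JDBC connection in step 2, the URL follows the standard PostgreSQL driver format; the host, port, and database name below are placeholder assumptions to adapt to your setup:

```shell
# Placeholder values; match your Pagila container and network setup.
PGHOST=host.example.com   # must be reachable from the Cognos pods
PGPORT=15432              # host port mapped in docker-compose-postgres-pagila.yml
PGDATABASE=pagila
JDBC_URL="jdbc:postgresql://${PGHOST}:${PGPORT}/${PGDATABASE}"
echo "${JDBC_URL}"
```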


Step 10: Optional — run port-forward and dashboard in the background

To keep port-forward and dashboard running:

nohup kubectl port-forward -n ${CLUSTER_NAMESPACE} svc/ca-ingress-lb 8090:8090 9300:9300 &

Use SSH tunnels for ports 9300 and the dashboard port when accessing from another machine.

minikube dashboard --url

e.g. http://127.0.0.1:34849/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/


What’s running when it works

After a successful deploy you’ll see pods for:

    • ca-cpd-cm-primary (StatefulSet) – Content Manager + caproxy sidecar
    • ca-dss – Dataset Service
    • ca-reporting – Reporting
    • ca-rest – REST API
    • ca-smarts – Smarts
    • ca-ui – UI
    • caproxy-frontdoor – Ingress proxy

Services like ca-cs, ca-audit-store, and the LoadBalancer ca-ingress-lb tie them together. You may see log messages about Jaeger or Instana if the chart is configured to send traces but those aren’t deployed; that’s safe to ignore for a POC.

Screenshot of kubectl get all or a Kubernetes dashboard view showing all Cognos pods and services.


Troubleshooting

  • ImagePullBackOff: check the registry secret regcred, the image tag/digest in the override, and that the API key has access to icr.io/ecabeta.
  • CNC-SDS-0260 … org.postgresql.Driver: add postgresql-42.7.3.jar (or a compatible version) to the DSS drivers PVC directory and restart the DSS pod.
  • Startup probe failed / 502: Content Manager (ca-cpd-cm-primary) is not ready; check DB connectivity (ca-cs, ca-audit-store, nc), the secrets, and the Postgres logs.
  • Jaeger/Instana errors: optional components; caproxy tries to send traces, which is safe to ignore if you don’t deploy them.
  • LoadBalancer EXTERNAL-IP pending: normal on Minikube; use kubectl port-forward to access services.


Takeaways

  1. Use a Minikube-specific override — LoadBalancer, openshiftContext: false, correct image tags/digests, and PostgreSQL config. Don’t rely on default OpenShift-oriented values.
  2. Run Cognos DBs outside the cluster — Postgres in Docker (or on the host) keeps data and makes recovery easier if you recreate Minikube.
  3. Plan storage up front — If you need durable data or to inject drivers, SMB (or another external StorageClass) is worth the one-time setup.
  4. Pin images — Check the registry and pin by tag or digest so deploys are reproducible.
  5. DSS needs the JDBC driver — Put the PostgreSQL jar in the drivers PVC and restart DSS; Reporting/CM may already have it.
  6. Port-forward is the way in — For Minikube, treat it as the standard way to reach Cognos and the dashboard.


Summary checklist

  1. Start Minikube with at least 32 GB RAM and 16 CPUs.
  2. Configure IBM registry access and create regcred in the Cognos namespace.
  3. Run PostgreSQL (content, audit, nc) and expose them to the cluster (Services/Endpoints or Docker network).
  4. (Optional) Set up SMB StorageClass and use it for PowerCube and DSS drivers PVC.
  5. Create DB secrets and run Helm with a Minikube + PostgreSQL override.
  6. Put the PostgreSQL JDBC jar in the DSS drivers volume and restart DSS if needed.
  7. Use kubectl port-forward to reach Cognos at http://localhost:9300/bi/.
  8. Create a data source in Cognos and run a report against an external database.