Deploying Db2 and Db2 Warehouse on native Kubernetes with containerized Db2: A Step-by-Step Guide

By Shiju Xie

1. Introduction

This guide provides a comprehensive walkthrough for setting up a Kubernetes cluster and deploying IBM Db2 Warehouse with the Universal Container and Operator (containerized Db2) using manual installation methods. Db2 runs in the Kubernetes cluster as a set of containerized microservices, or pods, managed by Kubernetes. The guide is designed for system engineers and administrators who need to build, manage, and validate a fully operational containerized Db2 environment within a controlled infrastructure.

The tutorial includes step-by-step instructions for preparing the environment, configuring Kubernetes, and deploying containerized Db2 components to ensure a reliable, scalable, and production-ready database platform. The following software versions are used for the demonstrations and procedures described in this document:

Kubernetes: v1.31

containerd.io: v1.7.27

IBM Db2 Engine: Version 12.1.3.0

Deployment performed using release build 7.4.0+20251011.015214.17553

2. Technology Stack

IBM containerized Db2

Red Hat Enterprise Linux 9.6

Kubernetes (manually installed on RHEL environment)

3. Kubernetes installation

3.1.   Overview

The Kubernetes project deprecated the dockershim component in late 2020 and removed it in Kubernetes 1.24. This means that Kubernetes no longer uses Docker Engine as a built-in container runtime; Docker and other container runtimes are treated equally and are integrated through the Container Runtime Interface (CRI), without separate built-in support.

Since Kubernetes no longer supports Docker out of the box, one straightforward approach is to use containerd directly, a container runtime that has been proven in production environments. The following sections deploy Kubernetes with containerd as the underlying container runtime.

3.2.   Environmental plan

In this example, we will set up the environment as follows. The plan covers four main aspects.

1.    Deployment type: One master (control-plane) node and three worker nodes will be used.

2.    Deployment method: We will follow the official kubeadm quick deployment method provided by Kubernetes.

3.    Deployment version: The environment will use Kubernetes version 1.31 and containerd.io version 1.7.27.

4.    Host configuration: Four servers will be provisioned. IP addresses and hostnames are assigned sequentially, and the operating system version is Red Hat Enterprise Linux 9.6.

Note: This setup is provided as an example for demonstration purposes. Actual production environments may require adjustments based on scale, network topology, and organizational requirements.

The information for the four hosts is as follows:

Server                 IP Address      OS Version    CPU Architecture   Function Description
steve71.fyre.ibm.com   10.11.97.228    Red Hat 9.6   x86_64             Master node
steve72.fyre.ibm.com   10.11.103.164   Red Hat 9.6   x86_64             Worker node 1
steve73.fyre.ibm.com   10.11.107.231   Red Hat 9.6   x86_64             Worker node 2
steve74.fyre.ibm.com   10.11.108.15    Red Hat 9.6   x86_64             Worker node 3

The software versions are as follows:

Name            Version number
Kubernetes      1.31
containerd.io   1.7.27

3.3.   Basic Configuration (on All hosts)

Tip: The following steps must be executed on all four hosts!

3.3.1. Set Hostnames and Update the Hosts File

1. Modify the Hostname

Run the following command on each node to set the appropriate hostname.

  •       Master node:
hostnamectl set-hostname steve71.fyre.ibm.com
  •       Worker node 1:
hostnamectl set-hostname steve72.fyre.ibm.com
  •       Worker node 2:
hostnamectl set-hostname steve73.fyre.ibm.com
  •       Worker node 3:
hostnamectl set-hostname steve74.fyre.ibm.com

Note: After changing the hostname, log out and back in (or run exec bash) for the change to take effect.

2. Add Hosts File (on All Nodes)

Append the following entries to the /etc/hosts file on each node (adjust the hostnames and IP addresses to match your environment).

cat >> /etc/hosts << EOF
10.11.97.228  steve71.fyre.ibm.com steve71
10.11.103.164 steve72.fyre.ibm.com steve72
10.11.107.231 steve73.fyre.ibm.com steve73
10.11.108.15  steve74.fyre.ibm.com steve74
EOF

After updating the file, verify that the entries have been added successfully.

cat /etc/hosts

3.3.2. Disable SELinux and Firewalld

For simplicity, in this guide we choose to disable firewalld. In production environments, it is recommended to configure the firewall according to your security policies, ensuring that the necessary ports and services are allowed.

Note: If you prefer to keep the local firewall enabled, you can define appropriate firewall rules to open network paths between the nodes in the cluster, ensuring proper communication among Kubernetes cluster components.

1. Disable SELinux: Edit the SELinux configuration file and set it to disabled.

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

For convenience, SELinux is disabled in this guide; however, disabling it is not strictly required. Readers are strongly encouraged to consult the official Kubernetes documentation for guidance on configuring SELinux in pod and container security contexts: Kubernetes Security Contexts
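
For readers who keep SELinux enabled, that documentation explains how SELinux labels can be assigned to individual pods through a security context. The sketch below is purely illustrative and assumes a running cluster; the pod name and MCS level are made up for this example.

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo            # hypothetical pod name, for illustration only
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"     # example MCS label; align with your SELinux policy
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["sleep", "3600"]
EOF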

2. Stop and Disable Firewalld

systemctl stop firewalld
systemctl disable firewalld

3. Reboot the System: After completing the modifications, it is recommended to reboot the host to apply the changes.

reboot

After reboot, SELinux will be disabled and Firewalld will be inactive.

3.3.3. Turn off Swap

Kubernetes requires swap to be disabled. If swap is not turned off, the kubelet will fail to start with the default configuration. Follow the steps below to disable swap.

1. Turn Off Swap Immediately

swapoff -a

2. Disable Swap Permanently

Comment out the swap entry in /etc/fstab to prevent it from mounting on boot.

sed -i '/ swap /s/^/#/' /etc/fstab

3. Verify Swap is Disabled

Check that swap is no longer active.

free -m

After completing these steps, swap will be disabled both immediately and on system reboot.

3.4.   Uninstall Old Versions of Docker (on All hosts)

Before installing Docker or containerd, it’s recommended to remove any old or conflicting versions to ensure a clean setup.

Run the following command on all hosts.

sudo dnf -y remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine \
containerd.io \
runc \
podman

This command removes all legacy Docker components and related container runtimes to avoid conflicts with the new installation. 

Note: These steps are taken from Docker’s official documentation. To ensure you always follow the most up-to-date procedures, please refer to the official guide rather than relying solely on the steps listed here.

3.5.   Install Required Dependencies (on All hosts)

Before installing Docker or Kubernetes components, ensure that the necessary system dependencies are installed. For reference, the following are the steps we used in our demonstration.

Run the following command on all hosts.

sudo dnf -y install dnf-plugins-core

The dnf-plugins-core package provides essential DNF plugins that are required for efficient management of repositories and packages. These plugins enable features such as:

-      Adding, enabling, or disabling repositories (dnf config-manager)

-      Extending package management capabilities for installation, updates, and dependency resolution

-      Supporting future commands in the installation process of Docker and Kubernetes components

Installing this package ensures that the hosts are properly prepared to handle repository configurations and package management tasks during the deployment.

Note: These steps are taken from Docker’s official documentation. To ensure you always follow the most up-to-date procedures, please refer to the official guide rather than relying solely on the steps listed here.

3.6.   Add the Official Repository (on All hosts)

To install the latest stable version of Docker, you need to add the official Docker repository to your system. Run the following command on all hosts.

sudo dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo

This command enables the official Docker repository, ensuring that you can install and update Docker packages directly from the trusted source.

3.7.   Install Containerd (on All hosts)

Kubernetes requires a container runtime to manage containers. In this setup, we will use containerd as the runtime.

3.7.1. Install Containerd

Run the following commands on all hosts.

sudo dnf install -y containerd.io-1.7.27-3.1.el9.x86_64
sudo systemctl enable --now containerd.service

The first command installs the specified version of containerd, and the second command enables and starts the containerd service immediately.

3.7.2. Configure containerd

After installing containerd, you need to generate and modify its default configuration to ensure compatibility with Kubernetes.

1.     Generate the Default Configuration

sudo mkdir -p /etc/containerd/
containerd config default | sudo tee /etc/containerd/config.toml

2.    Modify the Configuration File

Edit the generated configuration file /etc/containerd/config.toml and comment out the disabled_plugins line so that no containerd plugins are disabled (in particular the CRI plugin, which Kubernetes depends on).

sudo sed -i 's/^disabled_plugins*/#disabled_plugins/g' /etc/containerd/config.toml

Purpose: Ensures that all required plugins in containerd are enabled so Kubernetes can interact correctly with the container runtime.

3.    Enable SystemdCgroup

Kubernetes requires SystemdCgroup = true when using systemd as the cgroup driver

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

Purpose: Aligns containerd with Kubernetes’ expected cgroup management; without this, pods may fail to start, resource limits may not be enforced, or kubelet may report errors.
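
To double-check that both edits took effect before restarting the service, you can inspect the file (an optional sanity check):

grep -E 'disabled_plugins|SystemdCgroup' /etc/containerd/config.toml
# Expect the disabled_plugins line to be commented out and SystemdCgroup = true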

4. Reload and Restart containerd

Apply the configuration changes by reloading systemd and restarting containerd

sudo systemctl daemon-reload
sudo systemctl restart containerd

After completing these steps, containerd is properly configured and ready for use with Kubernetes.
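
As an optional verification, confirm that the service is active and that the runtime answers requests:

systemctl is-active containerd
sudo ctr version   # prints client and server versions when containerd is reachable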

3.8.   Install kubeadm, kubelet and kubectl (on All hosts)

This section sets up the necessary kernel modules, system configurations, and installs the core Kubernetes components: kubeadm, kubelet and kubectl.

1.    Load Kernel Modules

Temporarily Load Common Kernel Modules.

sudo modprobe br_netfilter
sudo modprobe ip_vs

Temporarily enable the IPVS modules used by kube-proxy, together with the overlay module required by containerd.

sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe overlay

Permanently Load Modules (persist after reboot)

cat <<EOF | sudo tee /etc/modules-load.d/kubernetes.conf
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
overlay
EOF

2. Adjust Kernel Parameters for Container Networking

cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the changes immediately.

sudo sysctl --system

Ensure the settings are active immediately, and install iproute-tc, which provides the tc utility expected by kubeadm's preflight checks and some network plugins.

echo '1' | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
echo '1' | sudo tee /proc/sys/net/ipv4/ip_forward
sudo dnf install -y iproute-tc

3. Set SELinux to Permissive Mode: Some Kubernetes network plugins require SELinux to be in permissive mode to allow container access to the host filesystem. If you already disabled SELinux in section 3.3.2, you can skip this step.

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

4. Add the Kubernetes Repository: First, remove any existing Kubernetes packages to avoid version conflicts.

sudo yum remove -y kubelet kubeadm kubectl

Add the official Kubernetes repository.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

5.    Install Kubernetes Components: Install the specified version of kubelet, kubeadm, and kubectl.

sudo yum install -y kubelet-1.31.13 kubeadm-1.31.13 kubectl-1.31.13 --disableexcludes=kubernetes

6. Enable and Start kubelet

sudo systemctl enable --now kubelet

The Kubernetes components are now installed and the kubelet service is enabled. Your system is ready for cluster initialization or joining.
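
Before moving on, you can optionally confirm the installed component versions:

kubeadm version -o short
kubelet --version
kubectl version --client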

3.9.   Initialize the Master Node (execute only on the Master)

This step initializes the Kubernetes control plane (master node), sets up the cluster networking with Flannel, and verifies that the master node is ready.

1. Reset Existing Kubernetes Configuration: If the node has been previously initialized or joined to a cluster, clean up any existing Kubernetes data.

sudo kubeadm reset -f

Remove old configuration files.

sudo rm -rf /root/.kube/config
sudo rm -rf /etc/cni/net.d

2. Initialize the Kubernetes Control Plane: Run the following command to initialize the master node with your desired API server address and Pod network CIDR (Flannel uses 10.244.0.0/16 by default).

sudo kubeadm init --apiserver-advertise-address=10.11.97.228 --pod-network-cidr=10.244.0.0/16

Note: Replace 10.11.97.228 with the IP address of your master node. After successful initialization, kubeadm will display a kubeadm join command; save it for later use when adding the worker nodes.

3. Configure kubectl for the root user: Set up kubectl so the root user can interact with the cluster

mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown root:root /root/.kube/config

4.    Deploy the Flannel CNI Plugin

Install the Flannel network plugin for pod networking.

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Restart the container runtime:

sudo systemctl restart containerd

5.    Verify Cluster Status

Check that all nodes are registered and the master node is in the Ready state.

kubectl get nodes

You should see the master node with the Ready status once the Flannel network is fully initialized.

At this point:

-    The Kubernetes control plane is initialized.

-    The Flannel CNI network plugin is deployed.

-    The master node is ready to manage worker nodes.
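
If the master node does not show Ready right away, it is usually because the Flannel pods are still starting. The checks below are optional; the kube-flannel namespace name assumes the default upstream manifest:

kubectl get pods -n kube-flannel -o wide
kubectl wait --for=condition=Ready nodes --all --timeout=300s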

3.10. Add Nodes to the Cluster (Master & Worker)

This section describes how to generate a join command on the master node and use it to add worker nodes to the Kubernetes cluster. It also includes labeling worker nodes for better role identification.

1.    Generate the Join Command (on the Master Node)

Run the following command on the master node to generate the join command for worker nodes.

kubeadm token create --print-join-command

Copy the entire output; it should look similar to the example below and will be used on each worker node. Example output:

kubeadm join 10.11.97.228:6443 --token coba3o.odoj6xvle754m196 \
--discovery-token-ca-cert-hash sha256:50f972f7bea0c86f3431fa328e95d09455034b1128a3bb0f724a5a0b945b896c

Replace the IP address, token, and hash values with those generated by your own master node.

2.    Join Worker Nodes to the Cluster (on the Worker Nodes)

On each worker node, run the join command obtained from the master node.

kubeadm join 10.11.97.228:6443 --token coba3o.odoj6xvle754m196 \
--discovery-token-ca-cert-hash sha256:50f972f7bea0c86f3431fa328e95d09455034b1128a3bb0f724a5a0b945b896c

Repeat this command on all worker nodes (e.g., worker1, worker2, worker3).

3.    Verify Node Registration (on the Master Node)

Once all worker nodes have joined, check their status from the master node.

kubectl get nodes

You should see all nodes listed, with their STATUS changing to Ready after a short period.

4.    Label Worker Nodes (on the Master Node)

Assign the role label worker to each worker node for better organization.

kubectl label --overwrite node steve72.fyre.ibm.com kubernetes.io/role=worker
kubectl label --overwrite node steve73.fyre.ibm.com kubernetes.io/role=worker
kubectl label --overwrite node steve74.fyre.ibm.com kubernetes.io/role=worker

Labeling nodes helps identify their purpose and can be useful when scheduling pods or applying role-based configurations.
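
As an example of how the label can be used, the snippet below pins a pod to the labeled worker nodes with a nodeSelector. This is a purely illustrative sketch and not part of the Db2 deployment; the deployment name and image are arbitrary.

cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker-only-demo            # hypothetical name, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker-only-demo
  template:
    metadata:
      labels:
        app: worker-only-demo
    spec:
      nodeSelector:
        kubernetes.io/role: worker  # matches the label applied above
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
EOF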

After completing these steps:

-      All worker nodes are successfully joined to the Kubernetes cluster.

-      Each node has been labeled appropriately.

-      The cluster is now fully functional and ready for workload deployment.

3.11. Verify Kubernetes Cluster Status on Master Node

After all nodes have joined the cluster and the network plugin has been deployed, verify that your Kubernetes cluster is running correctly by checking the status of nodes, pods, services, and configuration maps.

3.11.1. Check Cluster Node Status

List all nodes in the cluster with detailed information.

kubectl get nodes -o wide

The STATUS column should show Ready for all nodes (master and workers).

3.11.2. Check the Status of All Pods in the Cluster

Display all running pods across all namespaces.

kubectl get po -o wide -A

This command helps ensure that system pods (such as kube-system) and network plugin pods (e.g., Flannel) are running correctly.

3.11.3. Check the Status of All Services in the Cluster

List all services across all namespaces.

kubectl get svc -A

This shows cluster-level services such as the Kubernetes API server, DNS, and others.

3.11.4. Check the Status of All ConfigMaps in the Cluster

View all ConfigMaps in every namespace.

kubectl get configmaps -A

ConfigMaps store configuration data used by various applications and components within the cluster.

By verifying these components, you can confirm that:

-      All nodes are active and ready.

-      System and network pods are running as expected.

-      Core services and ConfigMaps are properly configured.

-      Your Kubernetes cluster is now fully operational and ready for workload deployment.

4. Deploy Db2

The containerized Db2 deployment was performed using release build 7.4.0+20251011.015214.17553  of version 12.1.3.0. The installation followed the standard containerized Db2 deployment procedure, ensuring that all required configurations and resources were properly initialized. After deployment, the containerized Db2 instance was successfully launched and is now running and accessible within the configured Kubernetes environment. For reference, see the official IBM documentation for Db2 and Db2 Warehouse deployments on Red Hat OpenShift and Kubernetes:
IBM Db2 and Db2 Warehouse deployments on RHEL, OpenShift, and Kubernetes

4.1.   Install OLM (Operator Lifecycle Manager)

Operator Lifecycle Manager (OLM) is an open source toolkit to manage Kubernetes Operators. OLM extends Kubernetes to provide a declarative method for installing, managing, and updating operators within a cluster environment. You must install OLM to run the Db2 Operator.

curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.20.0/install.sh | bash -s v0.20.0

Note: Version v0.20.0 was selected because it is a stable and validated release that is compatible with the Kubernetes version used in this deployment. You may refer to the official OLM releases for the latest versions. Alternatively, refer to the official IBM documentation: Installing Operator Lifecycle Manager (OLM) to deploy the Db2 Operator.
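
In addition to the pod check in the next step, you can confirm that the OLM namespaces and custom resource definitions were registered:

kubectl get ns olm operators
kubectl get crds | grep operators.coreos.com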

4.2.   Verify OLM Pod Status

Check that the OLM pods are running properly.

kubectl get pods -n olm

4.3.   Deploy the Db2 operator

We’re now ready to deploy the Db2 Operator using Operator Lifecycle Manager (OLM).
In this step, we’ll:

-      Create a dedicated namespace for db2u

-      Deploy the Db2 Operator catalog source in the OLM namespace

-      Deploy the Db2 Operator itself in the db2u namespace

This setup ensures that all Db2 Operator components are properly isolated and managed within their respective namespaces, using a dedicated namespace, catalog source, operator group, and subscription.

The following lists the Db2 Operator used and its associated versions. We'll use this information to install the operator.

Db2 Operator version: 120103.0.0
Db2 Operator upgrade channel: v120103.0
Db2 Engine version: s12.1.3.0
Container Application Software for Enterprises (CASE) version: 7.4.0+20251011.015214.17553
Catalog image (sha): icr.io/cpopen/ibm-db2uoperator-catalog@sha256:619915e9ceb695bfc84f91b87c8c1791bcec2c2db576fe84cd2ff65e513adda4

cat << EOF | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
 name: db2u
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
 name: ibm-db2uoperator-catalog
 namespace: olm
spec:
 displayName: IBM Db2U Catalog
 image: icr.io/cpopen/ibm-db2uoperator-catalog@sha256:619915e9ceb695bfc84f91b87c8c1791bcec2c2db576fe84cd2ff65e513adda4
 publisher: IBM
 sourceType: grpc
 updateStrategy:
   registryPoll:
     interval: 45m
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
 name: common-service-operator-group
 namespace: db2u
spec:
 targetNamespaces:
 - db2u
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
 name: ibm-db2uoperator-catalog-subscription
 namespace: db2u
 generation: 1
spec:
 channel: v120103.0
 installPlanApproval: Automatic
 name: db2u-operator
 source: ibm-db2uoperator-catalog  # Must match the catalogsource .metadata.name
 sourceNamespace: olm  # Must match the catalogsource .metadata.namespace
 startingCSV: db2u-operator.v120103.0.0
EOF
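
Before checking pods, you can optionally watch the catalog source, subscription, and install plan status to confirm that OLM has picked up the new resources:

kubectl get catalogsource ibm-db2uoperator-catalog -n olm
kubectl get subscription,installplan -n db2u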

4.4.   Verify OLM Namespace Pods

Check that the OLM pods, including the newly created ibm-db2uoperator-catalog pod, are running properly.

kubectl get pods -n olm

4.5.   Verify Db2U Namespace Pods

Ensure the Db2 operator pods are being deployed successfully.

kubectl get pods -n db2u

4.6.   Check Operator CSV Status

Confirm that the Db2 Operator ClusterServiceVersion (CSV) is successfully deployed.

kubectl get csv -n db2u

4.7.   Check Operator Deployment Status

Verify that the Db2 Operator deployment is active and available.

kubectl get deploy -n db2u

4.8.   Install NFS

In this demonstration, Db2 is deployed using NFS (Network File System) as the backend storage provider. NFS offers ReadWriteMany (RWX) access capability, enabling the head Pod, member Pods, and ancillary utility Pods to concurrently access shared persistent volumes. For production deployments, an appropriate storage solution should be selected based on performance, reliability, and operational requirements.

4.8.1. Install NFS Utilities

On the Master Node.

yum install -y nfs-utils

On each Worker Node.

yum install -y nfs-utils

4.8.2. Configure NFS on the Master Node

Start the NFS server.

systemctl start nfs-server

Create and export a shared directory.

mkdir /data
chmod 777 /data

Note that chmod 777 /data is used here for convenience to grant read, write, and execute permissions to all users on the NFS export. Because this permission level is broad, adjust it according to the security policies of your own organization.

echo "/data *(rw,sync,no_root_squash,insecure)" >> /etc/exports
exportfs -rav

Note: For security reasons, it is generally recommended to replace the wildcard (*) with a specific list of hostnames or IP addresses that are allowed to access the share. This limits access to trusted systems only and helps prevent unauthorized connections.
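
For example, restricting the export to the subnet used by the cluster nodes in this demonstration might look like the following (the subnet is illustrative; substitute the addresses or network of your own nodes):

echo "/data 10.11.0.0/16(rw,sync,no_root_squash)" >> /etc/exports
exportfs -rav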

4.8.3. Create Service Account and RBAC for NFS Client Provisioner

The nfs-client-provisioner ServiceAccount provides the necessary identity and permissions for the NFS Subdir External Provisioner to interact with the Kubernetes API.
It allows the provisioner to dynamically create and manage PersistentVolumes (PVs) and subdirectories on the NFS server in response to PersistentVolumeClaims (PVCs).

On the Master node.

cat << EOF | kubectl create -f - -n default
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF

4.8.4. Deploy the NFS Client Provisioner

Deploy the NFS dynamic provisioner to automatically handle Persistent Volume provisioning.

On the Master node.

cat << EOF | kubectl create -f - -n default
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: cp4d/nfs-client                    
            - name: NFS_SERVER
              value: <your_master_node_ip>
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: <your_master_node_ip>
            path: /data
EOF

4.8.5. Create StorageClass on the Master Node

Define a StorageClass that uses the NFS provisioner to dynamically create persistent volumes.

On the Master node.

cat << 'EOF' | kubectl create -f - -n default
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: cp4d/nfs-client     
parameters:
  pathPattern: "${.PVC.namespace}-${.PVC.name}/${.PVC.annotations.nfs.io/storage-path}"
  onDelete: delete
  archiveOnDelete: "false"
EOF

4.8.6. Verify NFS StorageClass on the Master Node

Confirm the NFS storage class has been successfully created and registered.

kubectl get sc | grep nfs-client
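
Optionally, you can verify dynamic provisioning end to end by creating a small test PVC against the new storage class and confirming that it binds. The PVC name is arbitrary and the claim can be deleted afterwards.

cat << EOF | kubectl create -f - -n default
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc                 # hypothetical name, for verification only
  annotations:
    nfs.io/storage-path: test        # consumed by the pathPattern of the StorageClass above
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pvc nfs-test-pvc -n default     # STATUS should become Bound
kubectl delete pvc nfs-test-pvc -n default  # clean up when finished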

This section described how to install and configure NFS as a dynamic storage backend for Kubernetes. The NFS client provisioner will automatically create and manage Persistent Volumes (PVs) for the containerized Db2 components.

Note on Non-Db2 Components:

These steps are provided as an example demonstration only and may involve components that are outside the scope of containerized Db2 deployment support. For complete, up-to-date, and authoritative guidance, always refer to the official documentation or repositories of the respective component:

NFS Exports Manual Page (man 5 exports)

NFS Setup Guide on RedHat

Disclaimer:

These instructions are for demonstration purposes only. Ownership, maintenance, and support of non-Db2 components remain the responsibility of the respective product teams or community projects.

Because the Db2 deployment in this demo uses the nfs-client StorageClass, NFS must be installed and configured on the cluster nodes. NFS (Network File System) allows all nodes in the cluster to access shared storage over the network, which is required for the Db2 persistent volumes.

4.9.   Deploy containerized Db2

In this step, you will deploy the IBM Db2 instance into the db2u namespace.

The configuration defines storage, resource limits, and authentication settings.

4.9.1. Deploy containerized Db2 Instance on the Master Node

Once the Db2 Operator is installed, the Db2uInstance custom resource (CR) provides the interface required to deploy Db2 or Db2 Warehouse; the CR is defined by the operator's custom resource definition and is the same interface used on Red Hat OpenShift. For deploying Db2 via the Db2uInstance CR, please refer to IBM's official documentation on creating and managing Db2 instances using the operator.

Run the following command on the master node to create the Db2uInstance custom resource.

cat << EOF | kubectl create -f - -n db2u
apiVersion: db2u.databases.ibm.com/v1
kind: Db2uInstance
metadata:
  name: entrepot
spec:
  version: s12.1.3.0
  nodes: 2
  podTemplate:
    db2u:
      resource:
        db2u:
          limits:
            cpu: 4
            memory: 8Gi
  environment:
    dbType: db2wh
    databases:
      - name: BLUDB
    partitionConfig:
      total: 4
      volumePerPartition: true
    authentication:
      ldap:
        enabled: false
  license:
    accept: true
  storage:
    - name: meta
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: nfs-client
      type: create
    - name: archivelogs
      type: create
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 25Gi
        storageClassName: nfs-client
    - name: data
      type: template
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: nfs-client
    - name: tempts
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: nfs-client
      type: template
    - name: etcd
      type: template
      spec:
        storageClassName: nfs-client
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
    - name: blulocal 
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 70Gi
        storageClassName: nfs-client
      type: template
EOF

4.9.2. Verify Db2 Deployment Status on the Master Node

After deploying containerized Db2, verify that all pods are created and running correctly in the db2u namespace.

kubectl get pods -n db2u

Check that the containerized Db2 instance itself is in the Ready state.

kubectl get db2uinstance -n db2u

Note: Ensure that all pods show a STATUS of Running and that the containerized Db2 instance is Ready. If any pods are not yet running, wait a few moments and re-run the commands. You can also inspect pod logs for troubleshooting.

kubectl logs <pod-name> -n db2u

When all pods are running and the containerized Db2 instance is Ready, it is fully operational within your Kubernetes cluster.
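
As a final smoke test, you can open a connection to the BLUDB database from inside the Db2 engine pod. The pod name below assumes the operator's default naming for an instance called entrepot (c-entrepot-db2u-0) and the default instance user db2inst1; adjust both to match what kubectl get pods -n db2u shows in your environment.

kubectl exec -it c-entrepot-db2u-0 -n db2u -- su - db2inst1 -c "db2 connect to BLUDB"
# A successful connection prints the database connection information for BLUDB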

5. Conclusion

By completing this deployment, we demonstrated how IBM containerized Db2 can run seamlessly on a native Kubernetes cluster, providing a modern, scalable foundation for enterprise data workloads. This setup highlights the flexibility of Kubernetes for database orchestration and the robustness of Db2 in containerized environments. Future enhancements could include performance tuning, integration with monitoring tools, and exploring high availability configurations to further strengthen the deployment. With these foundations in place, teams can confidently move toward production-ready, cloud-native data management with IBM containerized Db2.

The procedure and steps outlined in this article are provided for demonstration and illustration purposes only. For the most up-to-date and complete instructions, please refer to the official IBM Db2 documentation: Db2 and Db2 Warehouse deployments on RHEL, OpenShift, and Kubernetes

6. About the Authors

Steve Shiju Xie is a Software Developer on the containerized Db2 team at the Ireland Lab. He focuses on deploying and validating containerized Db2 on Kubernetes. He can be reached at steve.shiju.xie@ibm.com

Tony Sinclair is the manager of the containerized Db2 team. He's been at IBM for over 13 years and has primarily worked in development infrastructure, automation, and release engineering. Part of the pioneering team that first put Db2 into a container, he's been involved in every iteration of containerized Db2 since inception. Tony can be reached at tonyaps@ca.ibm.com

Rubing Wang is a Software Developer on the containerized Db2 team at the Ireland Lab. She is currently focused on deploying containerized Db2 across multiple architectures, including s390, PPC, and x86. She can be reached at rubingw@ibm.com
