A Case Study of the Ability to Bring Your Own Image (BYOI) to Db2U - TSM

By Labi Adekoya

Executive Summary 

Some of our customers using Db2/Db2 Warehouse containerized offerings (on K8s/OpenShift) have expressed interest in integrating custom-built third-party applications into the Db2 engine to support data-related operations such as backup-and-restore, archiving, and custom UDFs. This study highlights our efforts to enable customers and enterprise organizations to deploy custom images within the Db2 engine environment. Using IBM Storage Protect (formerly Spectrum Protect, a.k.a. Tivoli Storage Manager – TSM) as a reference implementation, we demonstrated how a custom-built TSM image can be successfully integrated into Db2U. 

To begin, we created a custom Db2 Warehouse (Db2Wh) image with the TSM client application layered on top and pushed it to the default image registry within an OpenShift cluster. We then defined an external-release ConfigMap referencing this custom image. This ConfigMap allows the Db2U operator to override the default Db2Wh image, ensuring that the Db2 engine is deployed using the specified custom image. 

To validate the integration, we configured the TSM client, registered it with a dedicated TSM server, and performed a basic data backup operation. Additionally, we provided detailed configuration steps and guidance on how the TSM client was brought into the Db2 environment and how communication was established with the TSM server, completing an end-to-end validation of the solution. 


Why the Need for the Ability to Build Custom Images on Db2 

As the Db2 engine is increasingly being used as a data source for third-party applications, several customers have requested the ability to perform backup-and-restore using tools such as Tivoli Storage Manager (TSM) and Dell EMC NetWorker Module for Databases and Applications (NMDA). Consequently, enhancements to the Db2 operator are required to enable seamless data backup capabilities in a containerized environment. To facilitate customer use cases where applications are built on top of Db2 in a containerized setup (also known as Db2 Universal Container – Db2U), we explored several potential solutions, including: 

  • Leveraging an external-release ConfigMap (CM) to override the default Db2 image. 

  • Modifying the Db2U Custom Resource (CR) specification template to allow image overrides at the instance level. 

After a thorough evaluation of these options, supported by proof-of-concept implementations, the team selected the Bring-Your-Own-Image (BYOI) approach using an external-release ConfigMap as the solution.  

This study presents a design overview of the BYOI approach, including its limitations, a practical use-case example, known issues, and implications of the design. 


Design Overview of Building a Custom Image on Db2 

As mentioned in the previous section, we explored two possible design options for BYOI into Db2U, namely: 

  1. The use of an external-release ConfigMap – This involves creating a Kubernetes (K8s) ConfigMap (CM) resource that defines which image to override for a given version. The Db2 operator then replaces the matching image in the default-release CM with the override image in the external-release CM for that specific version. Here’s a schematic representation of the external-release ConfigMap: 

    apiVersion: v1 
    kind: ConfigMap 
    metadata: 
      name: db2u-release-external 
    data: 
      json: |- 
        { 
          "databases": { 
            "db2u": { 
              "12.1.3.0": { 
                "images": { 
                  "db2u": "<local OCP registry >:<custom image tag>" 
                } 
              } 
            } 
          },
          "universals": null 
        } 


  2. The modification of the pod template of the Db2U CR to take an image property to allow an image override. In the current design, the Db2U CR does not expose the Db2 image; it’s transparently injected by the Db2U operator using other internal custom-defined objects. This design option requires modifying the Db2U CR to include an image property that allows custom image injection, as shown in the example configuration below: 

    apiVersion: db2u.databases.ibm.com/v1 
    kind: Db2uInstance 
    metadata: 
      name: db2wh-tsm 
    spec: 
      version: s12.1.3.0 
      nodes: 2 
    .... 
      podTemplate: 
        db2u: 
          resource: 
            db2u: 
              limits: 
                cpu: 12 
                memory: 24Gi 
          image: "db2u": "image-registry.openshift-image-registry.svc:5000/openshift/db2u:12.1.3.0-tsm" 
      environment: 
    ... 

Design Considerations 

To determine which of the solutions to implement, we evaluated the pros and cons of the proposed solutions.

Pros 

  1. In both design options, the Db2U operator is aware of the image override, which means that any changes made to the operator specification will be retained, including version changes. 

  2. Option 1 makes multi-tenancy possible by allowing an image override in specific namespaces without applying the change to deployments in other namespaces, even under the single-operator-per-cluster model in which one operator watches all namespaces. 

  3. Multi-tenancy is also possible with option 2, simply by overriding the image in each of the Db2U instances. However, an override is then not possible at a global level. 

Cons

  1. In both options, customers are responsible for the ongoing maintenance of the custom image, including day-2 operations such as version updates. For a Db2 upgrade, for example, customers will need to build and test their custom image with the Db2U operator and ensure that there are no regressions stemming from the Db2 version change. 

These design considerations informed the direction that led to the choice of BYOI used in this study, which is the focus of the next section. 

Design Direction 

Here we discuss how we settled on the design option used in this study. The goal is to provide context on the thought process that influenced the decision. 

  1. The design option of modifying the Db2uInstance CR schema (option 2 above) is a more permanent change in comparison to the use of an external-release ConfigMap, which is external to the CR schema. The inflexibility of option 2 therefore makes it more tedious for future releases in which we adopt a more generic solution for mutating Db2U. In addition, the original design plan was to employ a solution that keeps updates completely external, without the need to re-architect the core Db2uInstance CR. This design philosophy makes the use of an external-release CM favorable and reduces the work that would be required for future releases. 
     

  2. Version upgrades with an external-release CM might be relatively simpler, since customers can easily download the Container Application Software for Enterprises (CASE) bundle, rebuild the new custom Db2U image, and update the external-release CM before even upgrading the operator to the new CASE bundle. With this approach, the only change that happens on the operand is the version change. However, updates with the Db2uInstance CR schema modification require both version and image patching (see the sketch after this list). 
     

  3. Multi-tenancy deployment can be further enhanced for the CM approach by using a reference attribute that informs the CR of the external-release CM to use.  
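To make the upgrade flow in point 2 concrete, here is a minimal sketch of staging an override for a new release before the operator upgrade. The version s12.1.4.0 and the image tag are purely illustrative placeholders, not published artifacts: 

# Hypothetical upgrade prep: add an override entry for the new (illustrative)
# version to the external-release ConfigMap, then proceed with the operator upgrade
oc apply -n tsm -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: db2u-release-external
data:
  json: |
    {
      "databases": {
        "db2u": {
          "s12.1.4.0": {
            "images": {
              "db2wh": "image-registry.openshift-image-registry.svc:5000/tsm/db2u:12.1.4.0-tsmv1"
            }
          }
        }
      }
    }
EOF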

Based on the design context enumerated above, the use of external-release ConfigMap was chosen as the solution for BYOI.


Case Study: Building TSM on Db2 

Here we discuss an example use case of how we used the external-release ConfigMap to build a custom image that bundles the TSM client on top of the published Db2 image. 

Introduction to TSM 

TSM is a data protection platform that delivers centralized, automated backup and recovery across diverse environments, including virtual, physical, and cloud. It serves as a unified point of control and administration for these operations. Among its key capabilities, TSM offers multiple backup strategies – such as full, incremental, and differential – along with robust recovery options for restoring data. With respect to this study, the two key components of TSM necessary for its core functionality are: 
 

  1. TSM Server – the server manages the storage environment, policy definitions, and backup operations to the underlying storage devices. 

  2. TSM Client – the client is the software installed on individual machines or hosts (virtual or physical) that backs up data to the TSM server. 

Building TSM client on-top-of Db2 Image 

To build TSM on Db2, the TSM client must be deployed to access the Db2 engine (which acts as the data source), and the client must also be configured to communicate with the specified TSM server. In other words, building TSM on Db2 is a two-stage process: 

  • TSM Client Deployment 

  • TSM Server Registration 

TSM Client Deployment 

Here we discuss how we deployed the TSM client with the use of an external-release ConfigMap to facilitate BYOI. It’s a multi-step process including the following: 

  • Define the custom image with a Dockerfile 

  • Build the custom TSM image with Db2 layer 

  • Create an external-release CM with the newly created custom image 

  • Create configuration details, including dsm.sys (the client system-options file) and dsm.opt (the client user-options file), as a ConfigMap 

  • Deploy Db2 Warehouse MPP Db2uInstance CR  

  • Validate the deployment 

  • Post-deployment TSM data backup setup 

Define the custom image using a Dockerfile 

To create a custom Dockerfile, here are the steps we used in this case study: 

  1. Create a dedicated directory for tsm on a node that has access to the OpenShift cluster where the Db2 operator is deployed and change directory to the folder. For example, in the root directory, create a directory named tsm: 
     

    mkdir tsm; cd tsm 

     

  2. Download the tar file for the specific TSM version to be packaged and copy it to the tsm folder created in step 1. For example, for version 8.1.26 on x86 and ppc64le systems, you should have the following tarballs in the tsm folder: 
     

    x86 
    >tsm ls -la  
    total 862400  
    drwxr-xr-x 2 root root 46 Apr 15 13:12 .  
    dr-xr-x---. 23 root root 4096 Apr 15 13:10 ..  
    -rw-r--r-- 1 root root 883085589 Apr 14 10:10 SP_CLIENT_8.1.26_LIN86_ML.tar.gz 
     
    ppc64le 
    >tsm ls -la  
    total 643284  
    drwxr-xr-x 2 root root 50 Apr 10 08:51 .  
    dr-xr-x---. 23 root root 4096 Apr 10 08:51 ..  
    -rw-r--r-- 1 root root 658718514 Apr 4 12:44 SP_CLIENT_8.1.26_LINPOW_LE_ML.tar.gz

  3. In the same tsm folder, create a Dockerfile with the following content: 
 

x86

FROM icr.io/db2u/db2u.db2wh@sha256:da418a499ea6a827f2e133ad1b671df8e6d34c3f5db8d21daf4121ac781b0ec8 

# Switch to root user 
USER root 
 
# Copy tsm tar ball to tmp dir, extract & install packages 
RUN mkdir /tmp/tsm-client  
COPY SP_CLIENT_8.1.26_LIN86_ML.tar.gz /tmp/tsm-client  
RUN tar -xvf /tmp/tsm-client/SP_CLIENT_8.1.26_LIN86_ML.tar.gz -C /tmp/tsm-client \  
         && pushd /tmp/tsm-client/TSMCLI_LNX/tsmcli/linux86 \ 
         && rpm --import GSKit.pub4.pgp \ 
         && rpm --checksig gskcrypt64.rpm \ 
         && rpm -U gskcrypt64.rpm gskssl64*.rpm \ 
         && rpm -ivh TIVsm-API64*.rpm \ 
         && popd \ 
         && rm -rf /tmp/tsm-client 

# Create empty dsm.sys & dsm.opt for the TSM client 
RUN touch /opt/tivoli/tsm/client/api/bin64/dsm.sys && \ 
         touch /opt/tivoli/tsm/client/api/bin64/dsm.opt 
 
# Create script to symlink tsm configuration files 
RUN cat <<'EOF' > /usr/local/bin/configmap_symlink.sh  
#!/bin/bash  
echo "Waiting for ConfigMap mount at /mnt/blumeta0/configmap/external/"  
while [ ! -d "/mnt/blumeta0/configmap/external/" ]; do  
    sleep 1  
done 

echo "Searching for dsm.sys in /mnt/blumeta0/configmap/external/" DSM_SYS_CONFIGMAP_PATH=$(find /mnt/blumeta0/configmap/external/ -mindepth 1 -maxdepth 1 -type d | head -n 1)/dsm.sys  
if [ -e "$DSM_SYS_CONFIGMAP_PATH" ]; then  
    echo "Resolved ConfigMap Path: $DSM_SYS_CONFIGMAP_PATH"  
    ln -sf "$DSM_SYS_CONFIGMAP_PATH" /opt/tivoli/tsm/client/api/bin64/dsm.sys  
else  
    echo "Warning: dsm.sys not found in any ConfigMap directory under /mnt/blumeta0/configmap/external/"  
fi 

echo "Searching for dsm.opt in /mnt/blumeta0/configmap/external/" DSM_OPT_CONFIGMAP_PATH=$(find /mnt/blumeta0/configmap/external/ -mindepth 1 -maxdepth 1 -type d | head -n 1)/dsm.opt  
if [ -e "$DSM_OPT_CONFIGMAP_PATH" ]; then  
    echo "Resolved ConfigMap Path: $DSM_OPT_CONFIGMAP_PATH"  
    ln -sf "$DSM_OPT_CONFIGMAP_PATH" /opt/tivoli/tsm/client/api/bin64/dsm.opt  
else  
    echo "Warning: dsm.opt not found in any ConfigMap directory under /mnt/blumeta0/configmap/external/"  
fi  
EOF 

RUN chmod +x /usr/local/bin/configmap_symlink.sh 

WORKDIR /db2u 

# Update entrypoint script with the tsm symlink script 
RUN sed -i '/source \/etc\/profile/a /usr/local/bin/configmap_symlink.sh' db2u_root_entrypoint.sh 
 
WORKDIR /  
USER db2uadm 

ppc64le 

FROM icr.io/db2u/db2u.db2wh@sha256:b398aea1c2a64093c557132e92e35d92f4cc9eac5d8ef484cca20c7fe5c4f5ff 

# Switch to root user 
USER root 

# Copy tsm tar ball to tmp dir, extract & install packages 
RUN mkdir /tmp/tsm-client  
COPY SP_CLIENT_8.1.26_LINPOW_LE_ML.tar.gz /tmp/tsm-client  
RUN tar -xvf /tmp/tsm-client/SP_CLIENT_8.1.26_LINPOW_LE_ML.tar.gz -C /tmp/tsm-client \ 
         && pushd /tmp/tsm-client/TSMCLI_LNXPLE/tsmcli/linuxPLE \ 
         && rpm --import GSKit.pub4.pgp \ 
         && rpm --checksig gskcrypt64.rpm \ 
         && rpm --import RPM-GPG-KEY-ibmpkg \ 
         && rpm -U gskcrypt64.rpm gskssl64*.rpm \ 
         && rpm -ivh TIVsm-API64*.rpm \ 
         && popd \ 
         && rm -rf /tmp/tsm-client 

# Create empty dsm.sys & dsm.opt for the TSM client 
RUN touch /opt/tivoli/tsm/client/api/bin64/dsm.sys && \ 
         touch /opt/tivoli/tsm/client/api/bin64/dsm.opt 

# Create script to symlink tsm configuration files 
RUN cat <<'EOF' > /usr/local/bin/configmap_symlink.sh  
#!/bin/bash  
echo "Waiting for ConfigMap mount at /mnt/blumeta0/configmap/external/"  
while [ ! -d "/mnt/blumeta0/configmap/external/" ]; do  
    sleep 1  
done 

echo "Searching for dsm.sys in /mnt/blumeta0/configmap/external/" DSM_SYS_CONFIGMAP_PATH=$(find /mnt/blumeta0/configmap/external/ -mindepth 1 -maxdepth 1 -type d | head -n 1)/dsm.sys  
if [ -e "$DSM_SYS_CONFIGMAP_PATH" ]; then  
    echo "Resolved ConfigMap Path: $DSM_SYS_CONFIGMAP_PATH"  
    ln -sf "$DSM_SYS_CONFIGMAP_PATH" /opt/tivoli/tsm/client/api/bin64/dsm.sys  
else 
    echo "Warning: dsm.sys not found in any ConfigMap directory under /mnt/blumeta0/configmap/external/"  
fi 

echo "Searching for dsm.opt in /mnt/blumeta0/configmap/external/" DSM_OPT_CONFIGMAP_PATH=$(find /mnt/blumeta0/configmap/external/ -mindepth 1 -maxdepth 1 -type d | head -n 1)/dsm.opt  
if [ -e "$DSM_OPT_CONFIGMAP_PATH" ]; then 
    echo "Resolved ConfigMap Path: $DSM_OPT_CONFIGMAP_PATH"  
    ln -sf "$DSM_OPT_CONFIGMAP_PATH" /opt/tivoli/tsm/client/api/bin64/dsm.opt  
else  
    echo "Warning: dsm.opt not found in any ConfigMap directory under /mnt/blumeta0/configmap/external/"  
fi  
EOF 

RUN chmod +x /usr/local/bin/configmap_symlink.sh  

WORKDIR /db2u 

# Update entrypoint script with the tsm symlink script 
RUN sed -i '/source \/etc\/profile/a /usr/local/bin/configmap_symlink.sh' db2u_root_entrypoint.sh 

WORKDIR /  
USER db2uadm 

Now that we have defined the Dockerfile required for customizing, we can proceed to build the image. Building the image is the focus of the next sub-section. 

Build custom TSM image with Db2 layer 

To build the custom image in an OpenShift cluster where the Db2 operator is deployed, the following steps were executed: 

  1. First, we obtained the route for the default internal image registry by running the following command: 
     

    oc get routes -n openshift-image-registry 

     

  2. We then authenticated with the default internal image registry via the registry route as follows: 
     

    podman login -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.apps.bluesky.cp.fyre.ibm.com --tls-verify=false 

     
     
    Note: default-route-openshift-image-registry.apps.bluesky.cp.fyre.ibm.com is the route object for the internal image registry in the OpenShift cluster used in this study. The image registry will be different if a different or external image registry is being used. 

 

  3. Build the image. For example, while in the directory where the Dockerfile for the image is stored, we built the image accordingly: 
     

    podman build -t db2u:12.1.3.0-tsmv1 . --format docker


  4. Tag the image. For example: 
     

    podman tag localhost/db2u:12.1.3.0-tsmv1 default-route-openshift-image-registry.apps.bluesky.cp.fyre.ibm.com/tsm/db2u:12.1.3.0-tsmv1 

     

  5. Push the image to the registry: 

    podman push default-route-openshift-image-registry.apps.bluesky.cp.fyre.ibm.com/tsm/db2u:12.1.3.0-tsmv1 --tls-verify=false 
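As optional sanity checks (the default-route object name and the tsm namespace are assumptions based on this study's environment), the registry route can be resolved dynamically and the pushed image verified: 

# Resolve the internal registry's route host instead of hard-coding it
REGISTRY=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
echo "Registry route: ${REGISTRY}"

# Confirm the pushed image is now visible as an ImageStream tag in the tsm namespace
oc get imagestream db2u -n tsm -o jsonpath='{.status.tags[*].tag}{"\n"}'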

Create an external-release ConfigMap 

Create the db2u-release-external CM in the same namespace as the Db2 operator specifying the built image. For example: 

apiVersion: v1 
data: 
  json: | 
    { 
       "databases": { 
         "db2u": { 
           "s12.1.3.0": { 
             "images": { 
               "db2wh": "image-registry.openshift-image-registry.svc:5000/tsm/db2u:12.1.3.0-tsmv1" 
            } 
          } 
        } 
      } 
    } 
kind: ConfigMap 
metadata: 
  name: db2u-release-external 
  namespace: tsm 

Save the content above to a file and apply it with oc apply -f to create the ConfigMap. 

A few things to note here: 

  1. The name of the ConfigMap MUST be db2u-release-external. Otherwise, the image override will not happen. 

  2. In the CM, the internal registry service address is used rather than the route object. It takes the form image-registry.openshift-image-registry.svc:5000/<namespace-db2-is-deployed>/<name-of-the-custom-built-image-with-tag>. 
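Since the override depends on the operator parsing data.json, a quick parse check of the ConfigMap payload can save debugging time. This is a minimal sketch assuming jq is available on the workstation: 

# A malformed data.json would prevent the image override from being applied,
# so verify the payload parses cleanly
oc get cm db2u-release-external -n tsm -o jsonpath='{.data.json}' | jq .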

Create configuration details including the dsm.sys and dsm.opt as a ConfigMap 

For the TSM client to be able to communicate with the TSM server, the client must be configured with the server details as well as the details of the node on which the client is deployed. For example, this study used the following dsm.sys and dsm.opt files for a 3-node Db2 Warehouse MPP setup: 

dsm.opt

SErvername  db2inst1.tsm04

dsm.sys

SErvername tsm04 
TCPServeraddress tsm04.raleigh.ibm.com 
COMMMethod TCPip 
PASSWORDACCESS generate 
TCPPort 1500 
SSL no 
resourceutilization 6 
NODENAME c-db2wh-mpp-demo-db2u-0 

SErvername db2inst1.tsm04 
COMMmethod TCPip 
TCPPort 1500 
SSL no 
TCPServeraddress tsm04.raleigh.ibm.com 
PASSWORDACCESS generate 
PASSWORDDIR /mnt/blumeta0/tsm/api/c-db2wh-mpp-demo-db2u-0/cred_store 
errorlogname /mnt/blumeta0/db2/log/db2-dsmerror.log 
resourceutilization 6 
NODENAME c-db2wh-mpp-demo-db2u-0 

Note: 

  1. NODENAME is the hostname of the Db2 pod. 

  2. SErvername (in the dsm.opt file, and the second occurrence in the dsm.sys file) has the structure db2inst1.<tsm_server_name>. 

With the configuration files above stored in the same location as the Dockerfile (e.g. /root/tsm), we captured them in a ConfigMap object as follows: 

oc create cm db2u-tsm-config --from-file=dsm.sys=./dsm.sys --from-file=dsm.opt=./dsm.opt 
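As a quick check (assuming the ConfigMap was created in the same tsm namespace as the deployment), both files can be read back from the ConfigMap: 

# Dots in ConfigMap key names must be escaped in jsonpath expressions
oc get cm db2u-tsm-config -n tsm -o jsonpath='{.data.dsm\.opt}'
oc get cm db2u-tsm-config -n tsm -o jsonpath='{.data.dsm\.sys}'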

Deploy Db2 Warehouse MPP Db2uInstance CR 

Following the creation of the TSM configuration CM db2u-tsm-config, we deployed the Db2 WH MPP Db2uInstance Custom Resource (CR) using the following example configuration: 
 

apiVersion: db2u.databases.ibm.com/v1 
kind: Db2uInstance 
metadata: 
  name: db2wh-mpp-demo 
  namespace: tsm 
spec: 
  podTemplate: 
    db2u: 
      resource: 
        db2u: 
          limits: 
            cpu: 8 
            memory: 24Gi 
  version: s12.1.3.0 
  nodes: 3 
  addOns: 
    opendataformats: 
      enabled: true 
  environment: 
    dbType: db2wh 
    databases: 
      - name: BLUDB 
    partitionConfig: 
      total: 6 
      volumePerPartition: true 
    authentication: 
      ldap: 
        enabled: true 
  license: 
    accept: true 
  volumeSources: 
  - visibility: 
    - db2u 
    volumeSource: 
      configMap: 
        name: db2u-tsm-config 
  storage: 
    - name: meta 
      type: create 
      spec: 
        accessModes: 
          - ReadWriteMany 
        resources: 
          requests: 
            storage: 10Gi 
        storageClassName: managed-nfs-storage 
    - name: backup 
      type: create 
      spec: 
        accessModes: 
          - ReadWriteMany 
        resources: 
          requests: 
            storage: 5Gi 
        storageClassName: managed-nfs-storage 
    - name: archivelogs 
      type: create 
      spec: 
        accessModes: 
          - ReadWriteMany 
        resources: 
          requests: 
            storage: 10Gi 
        storageClassName: managed-nfs-storage 
    - name: data 
      type: template 
      spec: 
        accessModes: 
          - ReadWriteOnce 
        resources: 
          requests: 
            storage: 20Gi 
        storageClassName: managed-nfs-storage 
    - name: tempts 
      type: template 
      spec: 
        accessModes: 
          - ReadWriteOnce 
        resources: 
          requests: 
            storage: 5Gi 
        storageClassName: managed-nfs-storage 

One thing to note in the CR specification above is that the TSM configuration previously created as the db2u-tsm-config ConfigMap is attached as a volumeSource, which is then mounted at the path /mnt/blumeta0/configmap/external in the container. This allows the configuration files to be symlinked to the default location where the TSM client expects them. The script that checks for the existence of the CM and creates the symbolic links is included in the entrypoint script via the Dockerfile described previously. 
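Before running the checks below, a convenient way to watch the rollout converge is to follow the instance and its pods. The lowercase resource name db2uinstance is an assumption derived from the Db2uInstance CRD kind used above: 

# Follow the instance and pods until they reach a Ready state
oc get db2uinstance db2wh-mpp-demo -n tsm -w
oc get po -n tsm -w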

Validate the Deployment

Here we enumerate some of the sanity checks that were carried out to ensure that the deployment succeeded with the custom-built image.  

  1. We checked the image used by the deployed Db2 WH MPP Db2uInstance pods: 
     

    > oc get po c-db2wh-mpp-demo-db2u-0 -ojsonpath='{"Container: "}{.spec.containers[].name}{"\nImage: "}{.spec.containers[].image}{"\n"}' 
    Container: db2u 
    Image: image-registry.openshift-image-registry.svc:5000/tsm/db2u:12.1.3.0-tsmv1 

     
    And confirmed that it matches the image specified in the db2u-release-external ConfigMap: 
     

    > oc get cm db2u-release-external -oyaml | grep -A2 -E "s12.1.3.0" 
    "s12.1.3.0": { 
      "images": { 
        "db2wh": "image-registry.openshift-image-registry.svc:5000/tsm/db2u:12.1.3.0-tsmv1" 

     

  2. Following that, we validated that the Db2 pods and other related pods are in Running and Ready states: 

    > oc get po  
    NAME                                              READY STATUS  RESTARTS AGE  
    c-db2wh-mpp-demo-db2u-0                           1/1   Running    0    2d19h  
    c-db2wh-mpp-demo-db2u-1                           1/1   Running    0    2d19h  
    c-db2wh-mpp-demo-db2u-2                           1/1   Running    0    2d19h  
    c-db2wh-mpp-demo-etcd-0                           1/1   Running    0    2d20h  
    c-db2wh-mpp-demo-etcd-1                           1/1   Running    0    2d20h  
    c-db2wh-mpp-demo-etcd-2                           1/1   Running    0    2d20h  
    c-db2wh-mpp-demo-ldap-5f5556799-4tjwk             1/1   Running    0    2d20h  
    c-db2wh-mpp-demo-restore-morph-f86jl              0/1   Completed  0    2d19h  
    c-db2wh-mpp-demo-tools-7f8b7794d9-xc642           1/1   Running    0    2d20h  
    db2u-day2-ops-controller-manager-87df5df58-qp88j  1/1   Running    0    2d22h  
    db2u-operator-manager-89bfb8558-mk88m             1/1   Running    0    2d22h 

  3. With the pods in a healthy state, we checked that the installed TSM client API packages are present in the default directory: 

    > oc exec c-db2wh-mpp-demo-db2u-0 -- ls -la /opt/tivoli/tsm/client/api/bin64 
    Defaulted container "db2u" out of: db2u, instdb (init), init-kernel (init)  
    total 141900  
    drwxr-xr-x. 1 root bin 36 Mar 28 19:59 .  
    drwxr-xr-x. 1 root bin 19 Mar 28 15:29 ..  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 CS_CZ  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 DE_DE  
    -r-x------. 1 root bin 15330864 Jun 4 2024 dsmcert  
    lrwxrwxrwx. 1 root root 56 Mar 28 19:59 dsm.opt -> /mnt/blumeta0/configmap/external/db2u-tsm-config/dsm.opt  
    -r--r--r--. 1 root bin 706 Jun 4 2024 dsm.opt.smp  
    lrwxrwxrwx. 1 root root 56 Mar 28 19:59 dsm.sys -> /mnt/blumeta0/configmap/external/db2u-tsm-config/dsm.sys  
    -r--r--r--. 1 root bin 915 Jun 4 2024 dsm.sys.smp  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 EN_US  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 ES_ES  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 FR_FR  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 HU_HU  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 IT_IT  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 JA_JP  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 KO_KR  
    -r-xr-xr-x. 1 root bin 14964208 Jun 4 2024 libApiTSM64.so  
    -r-xr-xr-x. 1 root bin 4627888 Jun 4 2024 libcrypto.so.3  
    -r-xr-xr-x. 1 root bin 36260 Jun 4 2024 libdmapi.so  
    -r-xr-xr-x. 1 root bin 97637 Jun 4 2024 libgpfs.so  
    -r-xr-xr-x. 1 root bin 692384 Jun 4 2024 libssl.so.3  
    lrwxrwxrwx. 1 root bin 14 Jun 5 2024 libTsmViSdkAPI.so -> libTsmViSdk.so  
    -r-xr-xr-x. 1 root bin 83991568 Jun 4 2024 libTsmViSdk.so  
    -r-xr-xr-x. 1 root bin 25444280 Jun 4 2024 libtsmxerces-c-3.2.so  
    lrwxrwxrwx. 1 root bin 14 Jun 5 2024 libVMcrypto.so -> libcrypto.so.3  
    lrwxrwxrwx. 1 root bin 11 Jun 5 2024 libVMssl.so -> libssl.so.3  
    -r-xr-xr-x. 1 root bin 98080 Jun 4 2024 libxmlutil-8.1.23.0.so  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 PL_PL  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 PT_BR  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 RU_RU  
    drwxr-xr-x. 2 root bin 4096 Mar 28 15:29 sample  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 ZH_CN  
    drwxr-xr-x. 2 root bin 137 Mar 28 15:29 ZH_TW 

  4. In addition, we checked that the installed IBM Global Security Kit (GSKit) package files are present in the default install location: 

    > oc exec c-db2wh-mpp-demo-db2u-0 -- ls -la /usr/local/ibm/gsk8_64/  
    Defaulted container "db2u" out of: db2u, instdb (init), init-kernel (init)  
    total 16  
    drwxr-xr-x. 6 root root 88 Mar 28 15:29 .  
    drwxr-xr-x. 3 root root 21 Mar 28 15:29 ..  
    drwxr-xr-x. 2 root sys 46 Mar 28 15:29 bin  
    -rwxr-xr-x. 1 root sys 211 Mar 28 2023 copyright  
    drwxr-xr-x. 2 root sys 6 Mar 28 2023 docs  
    drwxr-xr-x. 2 root sys 6 Mar 28 2023 inc  
    drwxr-xr-x. 4 root sys 4096 Mar 28 15:29 lib64  
    -rwxr-xr-x. 1 root sys 5616 Mar 28 2023 ReadMe.txt 

  5. And finally, we confirmed that the TSM configuration files in the default install directory are symlinked to the files mounted via the ConfigMap volume db2u-tsm-config and that the contents of those files are indeed present and correct. 

    dsm.opt
    > oc exec c-db2wh-mpp-demo-db2u-0 -- ls -la /opt/tivoli/tsm/client/api/bin64 | grep "dsm.opt"  
    Defaulted container "db2u" out of: db2u, instdb (init), init-kernel (init)  
    lrwxrwxrwx. 1 root root 56 Mar 28 19:59 dsm.opt -> /mnt/blumeta0/configmap/external/db2u-tsm-config/dsm.opt  
    
    > oc exec c-db2wh-mpp-demo-db2u-0 -- cat /opt/tivoli/tsm/client/api/bin64/dsm.opt 
    Defaulted container "db2u" out of: db2u, instdb (init), init-kernel (init)  
    SErvername db2inst1.tsm04 


    dsm.sys

    > oc exec c-db2wh-mpp-demo-db2u-0 -- ls -la /opt/tivoli/tsm/client/api/bin64 | grep "dsm.sys"  
    Defaulted container "db2u" out of: db2u, instdb (init), init-kernel (init)  
    lrwxrwxrwx. 1 root root 56 Mar 28 19:59 dsm.sys -> /mnt/blumeta0/configmap/external/db2u-tsm-config/dsm.sys 
     
    > oc exec c-db2wh-mpp-demo-db2u-0 -- cat /opt/tivoli/tsm/client/api/bin64/dsm.sys  
    Defaulted container "db2u" out of: db2u, instdb (init), init-kernel (init)  
    SErvername tsm04  
    TCPServeraddress tsm04.raleigh.ibm.com  
    COMMMethod TCPip  
    PASSWORDACCESS generate  
    TCPPort 1500  
    SSL no  
    resourceutilization 6  
    NODENAME c-db2wh-mpp-demo-db2u-0 
    
    SErvername db2inst1.tsm04  
    COMMmethod TCPip  
    TCPPort 1500  
    SSL no  
    TCPServeraddress tsm04.raleigh.ibm.com  
    PASSWORDACCESS generate  
    PASSWORDDIR /mnt/blumeta0/tsm/api/c-db2wh-mpp-demo-db2u-0/cred_store 
    errorlogname /mnt/blumeta0/db2/log/db2-dsmerror.log  
    resourceutilization 6  
    NODENAME c-db2wh-mpp-demo-db2u-0 

TSM Client-Server Registration 

It’s important to ensure that all the Pods (nodes) from the Db2 WH deployment are registered with the TSM server. For example, the setup used in this study has the following Pods registered with the server:

  • c-db2wh-mpp-demo-db2u-0 
  • c-db2wh-mpp-demo-db2u-1 
  • c-db2wh-mpp-demo-db2u-2

Failure to register all the Pods/nodes with the TSM server will result in failed backups for the data partitions residing on the unregistered node(s). In addition, the nodes must be registered with the right number of open sessions, corresponding to the number of partitions per node/Pod. 
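For illustration, node registration on the TSM server side might look like the following sketch, run from a TSM administrative command line. The admin credentials, node passwords, and policy domain are placeholders, and MAXNUMMP=2 is shown as one way to align a per-node limit with the 2 data partitions per pod in this 6-partition, 3-node setup: 

# Illustrative only -- run against the TSM/Storage Protect server
for node in c-db2wh-mpp-demo-db2u-0 c-db2wh-mpp-demo-db2u-1 c-db2wh-mpp-demo-db2u-2; do
  dsmadmc -id=admin -password=<admin_password> \
    "register node ${node} <node_password> domain=STANDARD maxnummp=2"
done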

Backing up Db2 with TSM

After validating the deployment, we configured the backup settings and attempted a simple backup scenario using the TSM client built into the custom Db2 image: 

  1. We logged into the Db2 pod to create a directory for TSM and assign ownership of the relevant files: 

    oc exec -it c-db2wh-mpp-demo-db2u-0 -- bash  
    cd /mnt/blumeta0  
    mkdir tsm  
    TSMDIR="/mnt/blumeta0/tsm"  
    TSM_API_DIR="${TSMDIR}/api"  
    mkdir -p ${TSM_API_DIR}/$(hostname -s)  
    mkdir -p ${TSM_API_DIR}/$(hostname -s)/cred_store  
    sudo chown db2inst1:db2iadm1 ${TSM_API_DIR}/$(hostname -s)/cred_store 
    

  2. We switched to the db2inst1 user (or the configured Db2 instance owner): 

    su - db2inst1 


  3. While logged in as db2inst1, we created the TSM error log file specified in the dsm.sys configuration file: 

    touch /mnt/blumeta0/db2/log/db2-dsmerror.log

  4. Next, we added the necessary environment variables to the db2inst1 user profile as shown: 


    cat <<EOF > ${BLUMETAHOME}/db2inst1/sqllib/userprofile  
    export GSK_STRICTCHECK_CBCPADBYTES=GSK_TRUE 
    DSMI_DIR=/opt/tivoli/tsm/client/api/bin64 
    DSMI_CONFIG=/opt/tivoli/tsm/client/api/bin64/dsm.opt 
    DSMI_LOG=$DIAGPATH 
    export DSMI_DIR DSMI_CONFIG DSMI_LOG 
     
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/tivoli/tsm/client/api/bin64 
    EOF

  5. We then executed a few commands to ensure that the newly added environment variables are picked up. For example: 

    db2 terminate 
    db2 force applications all 
    db2 deactivate db bludb 
    
    db2stop force 
    rah ipclean -a 
    exit  
    su - db2inst1 
    db2start 
    db2 activate db bludb 

    Note:
    Substitute the database name(s) accordingly.

  6. Set up the password to authenticate the Pod/node with the TSM server: 

    /mnt/blumeta0/home/db2inst1/sqllib/adsm/dsmapipw 


    You should see a prompt like the following: 

    [db2inst1@c-db2wh-mpp-demo-db2u-0 - Db2U db2inst1]$ /mnt/blumeta0/home/db2inst1/sqllib/adsm/dsmapipw 
    
    Tivoli Storage Manager 
    API Version = 8.1.26 
    
    Enter your current password: 
    Enter your new password: 
    Enter your new password again: 
    Your new password has been accepted and updated.
  7. Back up the database to the TSM server: 

    db2 "backup db bludb on all dbpartitionnums online use tsm" 
    Part  Result 
    ----  ------------------------------------------------------------------------ 
    0000  DB20000I  The BACKUP DATABASE command completed successfully. 
    0001  DB20000I  The BACKUP DATABASE command completed successfully. 
    0002  DB20000I  The BACKUP DATABASE command completed successfully. 
    0003  DB20000I  The BACKUP DATABASE command completed successfully. 
    0004  DB20000I  The BACKUP DATABASE command completed successfully. 
    0005  DB20000I  The BACKUP DATABASE command completed successfully. 
      
    Backup successful. The timestamp for this backup image is: 20250331144003
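As an optional follow-up (a sketch, assuming the same db2inst1 environment configured earlier), the backup images stored on the TSM server can be queried from inside the pod with db2adutl, which ships with Db2 and uses the same TSM client API settings: 

# Run as db2inst1 inside the Db2 pod to list full backup images held on the TSM server
db2adutl query full db bludb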

Implications and Expectations of the Design 

For organizations and teams to be able to build custom images on top of Db2, there are a few important considerations worth mentioning: 

  • Organizations and customers need to reach out to the Db2 Containerization team at IBM, who will review their use case and provide a well-validated Dockerfile along with instructions on how to use it 

  • Custom images built with the provided Dockerfile must be pushed to the internal OpenShift registry or to a specific image registry being used if the internal registry is not allowed 

  • Customers are expected to ensure that the Pod on which TSM client is running is registered with the corresponding TSM server that would be used for the backup operation 

  • Customers must create the ConfigMap db2u-release-external with the URL of the custom image, as mentioned previously. It's important to use db2u-release-external as the ConfigMap name 

  • Any version updates must be validated using the custom-built image. Customers are responsible for ensuring that the update functions correctly with their custom image. 

  • Building custom Db2 images on top of third-party software is not supported. Customers will be solely responsible for any changes made to Db2U code or the Db2 install, or any RPMs that impact the functionality of Db2/Db2U code. 

  • Customers leveraging K8s/OCP resource backup-and-restore solutions such as OADP (OpenShift APIs for Data Protection) or Velero must ensure this external-release CM is also included in the namespace-scoped resource backups. Otherwise, on restore, any Db2uInstance CRs referencing the custom TSM image will error out, since that image is not accessible to the Db2 operator even when it is available in the image registry. A minimal example follows below. 
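As a minimal illustration of that last point (the backup name and namespace are placeholders from this study), a namespace-scoped Velero backup of the tsm namespace would naturally capture the db2u-release-external ConfigMap: 

# Namespace-scoped backup that includes the external-release ConfigMap
velero backup create db2u-tsm-backup --include-namespaces tsm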


Limitations of Building Custom Db2 Image with Third-Party Software 

BYOI only supports building a custom third-party image on top of a Db2 OLTP or Db2WH image, and not the other way round. That is, building a custom Db2 image on top of third-party images is not supported, for the following reasons: 

  • Db2U comprises several components that are independently validated with all forms of testing via well-developed and internally approved build pipelines; as such, any build done outside of the internally approved processes cannot be certified.  

  • Release engineering (Db2 Containerization) must certify every build and any build that does not pass through the certifying mechanisms is technically in violation of enterprise build policies. 

  • Issues arising from externally built Db2 images will be very difficult to troubleshoot and resolve, since the IBM development team will have no knowledge of the inherent build process. 

  • Building a custom Db2 image introduces potential security risks and exposes the database engine to unauthorized changes. This approach deviates from the best practices of maintaining immutable infrastructure for containerized applications, where consistency, traceability, and minimized attack surfaces are critical. 


Conclusion 

In this study, we discussed how customers can build custom applications on top of Db2 OLTP or Db2Wh using the BYOI approach. This approach lets customers layer their own custom image on top of the Db2 image by creating an external-release ConfigMap called db2u-release-external that specifies the custom-built image. As an example, we demonstrated how BYOI can be done for the TSM client, with step-by-step deployment instructions.


About the Authors 

Aruna De Silva is the architect for Db2/Db2 Warehouse containerized offerings on IBM Cloud Pak for Data, OpenShift, and Kubernetes. He has nearly two decades of database technology experience and is based at the IBM Toronto software laboratory. 

Since 2015, he has been actively involved with modernizing Db2, bringing Db2 Warehouse – Common Container, the first containerized Db2 solution, into production in 2016. Since 2019, he has been primarily focused on bringing the success of Db2 Warehouse to cloud-native platforms such as OpenShift and Kubernetes while embracing microservice architecture and deployment patterns. He can be reached at adesilva@ca.ibm.com. 

Labi Adekoya is a Reliability Engineer working on containerized Db2 Warehouse offerings. With over 12 years of experience, he focuses on building and demystifying reliable distributed systems. He can be reached at owolabi.adekoya@ibm.com.
 

Austin Clifford is a Senior Technical Staff Member in Hybrid Data Management based in the Ireland Lab. He has worked with database, data lake and warehousing technologies for more than two decades. In 2012, Austin led the team to achieve a Guinness World Record for the Largest Data Warehouse, a record that IBM held for two years. Austin has authored numerous papers and patents, advises clients on data warehouse, analytics and containerization best practices and is a regular speaker at technical conferences. He can be reached at acliffor@ie.ibm.com. 
