Now that we have defined NCOS for Db2Wh on EKS and shown how OpenEBS provides the LVM aggregation for the local on-disk caching tier and system temporary tablespace storage, we can deploy Db2Wh with the Db2uInstance CR. Db2uInstance is a Kubernetes Custom Resource (CR) object that provides a declarative way of deploying the components required to stand up the Db2 engine from a YAML configuration file. The following is an example of the YAML definition we used to create a Db2uInstance CR object:
apiVersion: db2u.databases.ibm.com/v1
kind: Db2uInstance
metadata:
  name: nvme-db2whmpp
  namespace: db2u
spec:
  version: s12.1.2.0
  nodes: 4
  advOpts:
    cosProvider: "aws"
    enableCos: "true"
  podTemplate:
    db2u:
      resource:
        db2u:
          limits:
            cpu: "12"
            memory: "112Gi"
  environment:
    dbType: db2wh
    databases:
      - name: BLUDB
        partitionConfig:
          total: 24
          volumePerPartition: true
    authentication:
      ldap:
        enabled: true
  license:
    accept: true
  storage:
    - name: meta
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 500Gi
        storageClassName: efs-terraform-sc
      type: create
    - name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 200Gi
        storageClassName: ebs-terraform-sc
      type: template
    - name: backup
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 200Gi
        storageClassName: efs-terraform-sc
      type: create
    - name: tempts
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 300Gi
        storageClassName: openebs-db2usystemp-lvm
      type: template
    - name: etcd
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: ebs-terraform-sc
      type: template
    - name: archivelogs
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 500Gi
        storageClassName: efs-terraform-sc
      type: create
    - name: cachingtier
      type: template
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 850Gi
        storageClassName: openebs-db2ucaching-lvm
A careful look at the CR defined above reveals a storage specification that combines the EFS, EBS, and OpenEBS CSI drivers, as indicated by the dotted rectangle in Fig 2. A successful deployment shows all the Db2 pods in Ready and Running states, as shown in the sample output below:
NAME READY STATUS RESTARTS AGE
c-nvme-db2whmpp-db2u-0 1/1 Running 0 25h
c-nvme-db2whmpp-db2u-1 1/1 Running 0 25h
c-nvme-db2whmpp-db2u-2 1/1 Running 0 25h
c-nvme-db2whmpp-etcd-0 1/1 Running 0 25h
c-nvme-db2whmpp-etcd-1 1/1 Running 0 25h
c-nvme-db2whmpp-etcd-2 1/1 Running 0 25h
c-nvme-db2whmpp-ldap-988d687b5-nkg8f 1/1 Running 0 25h
c-nvme-db2whmpp-restore-morph-jmzlw 0/1 Completed 0 25h
c-nvme-db2whmpp-tools-864d49886-76lwd 1/1 Running 0 25h
db2u-day2-ops-controller-manager-6dff47f8f8-q8k98 1/1 Running 0 27h
db2u-operator-manager-649fcd9b77-xnhf6 1/1 Running 0 27h
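Beyond the pod listing, the Db2uInstance CR itself reports an overall state. A minimal check (a sketch, assuming the CR name and namespace used above; the exact status columns depend on the operator version):
# The state reported by the operator should eventually show Ready once all components are up
kubectl get db2uinstances -n db2u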
A detailed step-by-step guide to the different deployment scenarios considered in this study is the focus of the next section.
How to configure VG+LVM for Db2’s NCOS Use Cases
In this section, we present a step-by-step guide to replicating various VG/LVM strategies tailored to different Db2 NCOS use cases. We begin by demonstrating how to create separate VGs for caching tier and temporary tablespaces (tempts). Next, we show how a single VG can be configured to support both storage types. We then explore the approach of partitioning VGs from a single Physical Volume (PV), followed by a walkthrough of enabling thin provisioning to optimize space utilization. Each strategy is designed to offer flexibility depending on specific workload and node requirements.
Prerequisites for all use cases
Before implementing any of the use cases discussed below, ensure that Helm is installed (if it is not already) and that OpenEBS is set up and installed (a minimal installation sketch is shown at the end of these prerequisites).
- Validate that the OpenEBS installation is successful by checking that the OpenEBS pods are running and in the Ready state.
- Check that the Db2 operator is installed and in the Ready state. If it is not already installed, refer to the official documentation on how to install the Db2 operator. A validation output will look like the following:
kubectl get clusterserviceversion
NAME DISPLAY VERSION REPLACES PHASE
db2u-operator.v120102.0.0 IBM Db2 120102.0.0 db2u-operator.v120101.0.0 Succeeded
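For reference, the following is a minimal sketch of installing OpenEBS with Helm and validating it, assuming the community OpenEBS Helm chart with default values; adjust the chart, version, and values to your environment:
# Add the OpenEBS Helm repository and install it into its own namespace
helm repo add openebs https://openebs.github.io/openebs
helm repo update
helm install openebs openebs/openebs -n openebs --create-namespace

# Validate that the OpenEBS pods (including the LVM LocalPV node DaemonSet pods) are Running and Ready
kubectl get pods -n openebs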
Use Case 1: Using distinct NVMe devices
Here we show how distinct NVMe devices can be used for caching tier and tempts on EKS.

Fig 3: Using distinct NVMe devices with separate Volume Groups
- Check that NVMe devices are available on the cluster nodes. As stated previously, this study is based on a five-node EKS cluster, and each node has two NVMe devices. A sample output is shown below:
lsblk | grep nvme
nvme1n1 259:0 0 3.4T 0 disk
nvme0n1 259:1 0 100G 0 disk
├─nvme0n1p1 259:2 0 100G 0 part /
└─nvme0n1p128 259:3 0 1M 0 part
nvme2n1 259:4 0 3.4T 0 disk
- From the output in step 1, the nvme1n1 and nvme2n1 devices are available. Create Physical Volumes (PVs) for them – one for the caching tier and the other for tempts:
pvcreate /dev/nvme1n1
pvcreate /dev/nvme2n1
- Next, create Volume Groups (VGs) for the PVs created previously:
vgcreate db2ucaching_vg /dev/nvme1n1
vgcreate db2usystemp_vg /dev/nvme2n1
- Create corresponding SCs with the VGs specified in parameters and in allowedTopologies:
db2ucaching:
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-db2ucaching-lvm
allowVolumeExpansion: true
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "db2ucaching_vg"
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/lvmvg-caching
    values:
    - db2ucaching_vg
EOF
db2usystemp:
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-db2usystemp-lvm
allowVolumeExpansion: true
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "db2usystemp_vg"
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/lvmvg-temp
    values:
    - db2usystemp_vg
EOF
- Label nodes with the same key-value defined for the allowedTopologies in the SCs:
kubectl label node ip-10-0-108-141.us-east-2.compute.internal openebs.io/lvmvg-caching=db2ucaching_vg
kubectl label node ip-10-0-108-141.us-east-2.compute.internal openebs.io/lvmvg-temp=db2usystemp_vg
Kubernetes allows only one value per label key, so different label keys must be used for the two VGs, and the labels must be applied on all relevant nodes.
- The node labels are not read by default by the OpenEBS LVM plugin; the plugin needs to be made aware of the label keys. Therefore, edit the openebs-lvm-localpv-node DaemonSet, update the ALLOWED_TOPOLOGIES environment variable to include the label keys, and wait for the DaemonSet to roll out completely on all the nodes.
kubectl edit ds openebs-lvm-localpv-node -n openebs
Then update the ALLOWED_TOPOLOGIES environment variable in the openebs-lvm-plugin container spec.
Replace this:
- name: ALLOWED_TOPOLOGIES
  value: kubernetes.io/hostname
With the keys defined for allowedTopologies in the SCs. For example:
- name: ALLOWED_TOPOLOGIES
  value: kubernetes.io/hostname,openebs.io/lvmvg-caching,openebs.io/lvmvg-temp
Then wait for the pods to roll out on all nodes.
- Confirm that the labels have been picked up by OpenEBS LVM plugin:
kubectl logs -n openebs -l app=openebs-lvm-node -c openebs-lvm-plugin | grep accessible_topology
A log entry such as the one below confirms that the node labels have been picked up:
I0529 18:42:38.559328 1 grpc.go:81] GRPC response: {"accessible_topology":{"segments":{"kubernetes.io/hostname":"ip-10-0-108-141.us-east-2.compute.internal","openebs.io/lvmvg-caching":"db2ucaching_vg","openebs.io/lvmvg-temp":"db2usystemp_vg","openebs.io/nodename":"ip-10-0-108-141.us-east-2.compute.internal"}},"node_id":"ip-10-0-108-141.us-east-2.compute.internal"}
- Deploy the Db2Wh CR with the storage resources updated accordingly. In this study, we used the YAML definition shown previously. The application PersistentVolumeClaims (PVCs) should bind successfully to the corresponding PersistentVolumes (PVs), and the pods should start successfully. Example output is shown below:
PVCs
kubectl get pvc | grep openebs
cachingtier-c-nvme-db2whmpp-db2u-0 Bound pvc-351c8546-8104-42a2-bf96-65ce6ad07952 850Gi RWO openebs-db2ucaching-lvm 25h
cachingtier-c-nvme-db2whmpp-db2u-1 Bound pvc-e6c71dd3-4b13-4d04-9b5e-ca9297325b46 850Gi RWO openebs-db2ucaching-lvm 25h
cachingtier-c-nvme-db2whmpp-db2u-2 Bound pvc-a6607c5e-45a2-4b77-925d-370c123571ec 850Gi RWO openebs-db2ucaching-lvm 25h
tempts-c-nvme-db2whmpp-db2u-0 Bound pvc-5a973e80-ffe8-47fe-8786-521f827971c0 300Gi RWO openebs-db2usystemp-lvm 25h
tempts-c-nvme-db2whmpp-db2u-1 Bound pvc-bc15cff7-095f-410d-965c-3aec7bf68b68 300Gi RWO openebs-db2usystemp-lvm 25h
tempts-c-nvme-db2whmpp-db2u-2 Bound pvc-bc62b0ea-0f58-4fb0-b485-d945042b6dbd 300Gi RWO openebs-db2usystemp-lvm 25h
PVs
kubectl get pv | grep openebs
pvc-351c8546-8104-42a2-bf96-65ce6ad07952 850Gi RWO Delete Bound db2u/cachingtier-c-nvme-db2whmpp-db2u-0 openebs-db2ucaching-lvm 25h
pvc-5a973e80-ffe8-47fe-8786-521f827971c0 300Gi RWO Delete Bound db2u/tempts-c-nvme-db2whmpp-db2u-0 openebs-db2usystemp-lvm 25h
pvc-a6607c5e-45a2-4b77-925d-370c123571ec 850Gi RWO Delete Bound db2u/cachingtier-c-nvme-db2whmpp-db2u-2 openebs-db2ucaching-lvm 25h
pvc-bc15cff7-095f-410d-965c-3aec7bf68b68 300Gi RWO Delete Bound db2u/tempts-c-nvme-db2whmpp-db2u-1 openebs-db2usystemp-lvm 25h
pvc-bc62b0ea-0f58-4fb0-b485-d945042b6dbd 300Gi RWO Delete Bound db2u/tempts-c-nvme-db2whmpp-db2u-2 openebs-db2usystemp-lvm 25h
pvc-e6c71dd3-4b13-4d04-9b5e-ca9297325b46 850Gi RWO Delete Bound db2u/cachingtier-c-nvme-db2whmpp-db2u-1 openebs-db2ucaching-lvm 25h
Db2 Pods
kubectl get po
NAME READY STATUS RESTARTS AGE
c-nvme-db2whmpp-db2u-0 1/1 Running 0 25h
c-nvme-db2whmpp-db2u-1 1/1 Running 0 25h
c-nvme-db2whmpp-db2u-2 1/1 Running 0 25h
c-nvme-db2whmpp-etcd-0 1/1 Running 0 25h
c-nvme-db2whmpp-etcd-1 1/1 Running 0 25h
c-nvme-db2whmpp-etcd-2 1/1 Running 0 25h
c-nvme-db2whmpp-ldap-988d687b5-nkg8f 1/1 Running 0 25h
c-nvme-db2whmpp-restore-morph-jmzlw 0/1 Completed 0 25h
c-nvme-db2whmpp-tools-864d49886-76lwd 1/1 Running 0 25h
db2u-day2-ops-controller-manager-6dff47f8f8-q8k98 1/1 Running 0 27h
db2u-operator-manager-649fcd9b77-xnhf6 1/1 Running 0 27h
From the PVC/PV output, we can see that the volumes are provisioned by OpenEBS via the SCs defined previously, which confirms that distinct VGs are used for the caching tier and tempts data. A node-level check is sketched below.
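For additional confirmation at the node level, the VGs and the logical volumes carved from them can be inspected directly on a worker node, and the LVM LocalPV driver's volume objects can be listed from the cluster. A minimal sketch, assuming standard LVM tooling on the nodes and OpenEBS installed in the openebs namespace:
# On a worker node: each VG should show consumed capacity, with one LV per bound PVC scheduled to that node
sudo vgs db2ucaching_vg db2usystemp_vg
sudo lvs db2ucaching_vg db2usystemp_vg

# From the cluster: the LVM LocalPV driver records each provisioned volume as an LVMVolume custom resource
kubectl get lvmvolumes -n openebs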
Use Case 2: Using single Volume Group with one or more NVMe devices
The only difference between this use case and use case 1 is that a single Volume Group, backed by one or more NVMe devices, is shared by the caching tier and tempts data.

Fig 4: Using a single Volume Group with one or more NVMe devices
Here are the steps involved:
- Check the NVMe devices available and choose one of them (if there is more than one). For example, using output similar to use case 1:
lsblk | grep nvme
nvme1n1 259:0 0 3.4T 0 disk
nvme0n1 259:1 0 100G 0 disk
├─nvme0n1p1 259:2 0 100G 0 part /
└─nvme0n1p128 259:3 0 1M 0 part
nvme2n1 259:4 0 3.4T 0 disk
- Since we have two devices per node, we used one of them (nvme1n1) to create a Physical Volume. For example:
pvcreate /dev/nvme1n1
If you want to consolidate the secondary NVMe device (nvme2n1), create a Physical Volume for it as well:
pvcreate /dev/nvme2n1
- Create a VG for the PV(s) created in step 2:
vgcreate db2u_vg /dev/nvme1n1
If you are consolidating both Physical Volumes/NVMe devices into a single storage pool, issue the vgcreate command with both PVs:
vgcreate db2u_vg /dev/nvme1n1 /dev/nvme2n1
- Create an SC from the VG created previously:
db2u (we create a single SC here)
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-db2u-lvm
allowVolumeExpansion: true
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "db2u_vg"
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/lvmvg-db2u
    values:
    - db2u_vg
EOF
The key thing to note in the SC defined above is that both storage types will share the same VG configured in the SC (a node-level check confirming this is sketched at the end of this use case).
- Label the nodes with the same key-value defined in the SC:
kubectl label node ip-10-0-108-141.us-east-2.compute.internal openebs.io/lvmvg-db2u=db2u_vg
- Refer to steps 6 and 7 of use case 1 to ensure that the node label is read by the OpenEBS LVM plugin.
- Deploy the Db2Wh CR with the storage configuration updated for the caching tier and tempts. For this study, we used the same YAML as in use case 1, but with the caching tier and tempts storage entries updated to reference the SC created previously:
...
    - name: tempts
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 300Gi
        storageClassName: openebs-db2u-lvm
      type: template
    - name: cachingtier
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 850Gi
        storageClassName: openebs-db2u-lvm
      type: template
...
Sample output:
PVCs
kubectl get pvc | grep openebs
cachingtier-c-nvme-db2whmpp-db2u-0 Bound pvc-351c8546-8104-42a2-bf96-65ce6ad07952 850Gi RWO openebs-db2u-lvm 17h
cachingtier-c-nvme-db2whmpp-db2u-1 Bound pvc-e6c71dd3-4b13-4d04-9b5e-ca9297325b46 850Gi RWO openebs-db2u-lvm 17h
cachingtier-c-nvme-db2whmpp-db2u-2 Bound pvc-a6607c5e-45a2-4b77-925d-370c123571ec 850Gi RWO openebs-db2u-lvm 17h
tempts-c-nvme-db2whmpp-db2u-0 Bound pvc-5a973e80-ffe8-47fe-8786-521f827971c0 300Gi RWO openebs-db2u-lvm 17h
tempts-c-nvme-db2whmpp-db2u-1 Bound pvc-bc15cff7-095f-410d-965c-3aec7bf68b68 300Gi RWO openebs-db2u-lvm 17h
tempts-c-nvme-db2whmpp-db2u-2 Bound pvc-bc62b0ea-0f58-4fb0-b485-d945042b6dbd 300Gi RWO openebs-db2u-lvm 17h
PVs
kubectl get pv | grep openebs
pvc-351c8546-8104-42a2-bf96-65ce6ad07952 850Gi RWO Delete Bound db2u/cachingtier-c-nvme-db2whmpp-db2u-0 openebs-db2u-lvm 17h
pvc-5a973e80-ffe8-47fe-8786-521f827971c0 300Gi RWO Delete Bound db2u/tempts-c-nvme-db2whmpp-db2u-0 openebs-db2u-lvm 17h
pvc-a6607c5e-45a2-4b77-925d-370c123571ec 850Gi RWO Delete Bound db2u/cachingtier-c-nvme-db2whmpp-db2u-2 openebs-db2u-lvm 17h
pvc-bc15cff7-095f-410d-965c-3aec7bf68b68 300Gi RWO Delete Bound db2u/tempts-c-nvme-db2whmpp-db2u-1 openebs-db2u-lvm 17h
pvc-bc62b0ea-0f58-4fb0-b485-d945042b6dbd 300Gi RWO Delete Bound db2u/tempts-c-nvme-db2whmpp-db2u-2 openebs-db2u-lvm 17h
pvc-e6c71dd3-4b13-4d04-9b5e-ca9297325b46 850Gi RWO Delete Bound db2u/cachingtier-c-nvme-db2whmpp-db2u-1 openebs-db2u-lvm 17h
Db2 Pods
kubectl get po
NAME READY STATUS RESTARTS AGE
c-nvme-db2whmpp-db2u-0 1/1 Running 0 17h
c-nvme-db2whmpp-db2u-1 1/1 Running 0 17h
c-nvme-db2whmpp-db2u-2 1/1 Running 0 17h
c-nvme-db2whmpp-etcd-0 1/1 Running 0 17h
c-nvme-db2whmpp-etcd-1 1/1 Running 0 17h
c-nvme-db2whmpp-etcd-2 1/1 Running 0 17h
c-nvme-db2whmpp-ldap-966d917b5-nkg8f 1/1 Running 0 17h
c-nvme-db2whmpp-restore-morph-jmzlw 0/1 Completed 0 17h
c-nvme-db2whmpp-tools-998d57889-72kpq 1/1 Running 0 17h
db2u-day2-ops-controller-manager-6dff47f8f8-q8k98 1/1 Running 0 17h
db2u-operator-manager-649fcd9b77-xnhf6 1/1 Running 0 17h
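As with use case 1, a node-level check can confirm that both volume types are now carved out of the single shared VG. A minimal sketch, assuming standard LVM tooling on the worker nodes:
# Both the cachingtier and tempts LVs should appear under the same VG,
# and VFree should reflect their combined allocation
sudo vgs db2u_vg
sudo lvs db2u_vg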
Use Case 3: Using Partitioned VGs from a single NVMe device
This use case involves partitioning a single NVMe device into two and then creating VGs over those partitions. It is similar to use case 1 except for the device partitioning steps, so we provide the partitioning steps here and then refer to use case 1 to complete the deployment process.

Fig 5: Using Partitioned Volume Groups from a single NVMe device
Here are the steps to partition a device:
- Take one of the devices, say /dev/nvme1n1, and partition it into two:
parted /dev/nvme1n1
Then inside the prompt, enter the following:
mklabel gpt
mkpart primary 0% 50%
mkpart primary 50% 100%
print
The above example partitions the disk into two equal parts. You can size the partitions based on your storage use case, such as 30% for the NCOS caching tier and 70% for system temps (a non-interactive variant is sketched after these steps).
- Verify that the device has been partitioned. For a successful partitioning, the output should look like the one below:
lsblk | grep nvme
nvme1n1 259:0 0 1.7T 0 disk
├─nvme1n1p1 259:6 0 884.8G 0 part
└─nvme1n1p2 259:7 0 884.8G 0 part
ls -la /dev/nvme1n1*
brw-rw---- 1 root disk 259, 0 Jun 3 14:33 /dev/nvme1n1
brw-rw---- 1 root disk 259, 6 Jun 3 15:35 /dev/nvme1n1p1
brw-rw---- 1 root disk 259, 7 Jun 3 15:35 /dev/nvme1n1p2
The output shows that the device has been partitioned into two parts of ~885G each.
- Create PVs with the partitions:
pvcreate /dev/nvme1n1p1
pvcreate /dev/nvme1n1p2
- Create corresponding VGs with the PVs:
vgcreate db2ucaching_vg /dev/nvme1n1p1
vgcreate db2usystemp_vg /dev/nvme1n1p2
Confirm that the creation succeeds by checking the pvs and vgs output; it should look like the one below:
PVs:
PV VG Fmt Attr PSize PFree
/dev/nvme1n1p1 db2ucaching_vg lvm2 a-- 884.75g 884.75g
/dev/nvme1n1p2 db2usystemp_vg lvm2 a-- 884.75g 884.75g
VGs:
VG #PV #LV #SN Attr VSize VFree
db2ucaching_vg 1 0 0 wz--n- 884.75g 884.75g
db2usystemp_vg 1 0 0 wz--n- 884.75g 884.75g
Note that these steps (1-4) must be completed on all the participating nodes.
- Follow steps 4-8 in use case 1 to complete the deployment process.
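Since steps 1-4 must be repeated on every participating node, the interactive parted session from step 1 can also be scripted. A non-interactive sketch, assuming GNU parted and the 50/50 split used above (adjust the percentages for other splits, such as 30/70):
# Partition the device non-interactively; adjust the boundaries to your desired split
parted --script /dev/nvme1n1 mklabel gpt mkpart primary 0% 50% mkpart primary 50% 100%

# Verify that the two partitions were created
lsblk /dev/nvme1n1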
Use Case 4: Enabling Thin Provisioning
Thin provisioning helps with efficient space management by allocating space only when it is needed (or requested) by applications for data storage rather than reserving the space upfront. Any of the previously mentioned use cases can be configured for thin provisioning; the only change that is required is enabling thin provisioning in the SCs.
A common pattern is to use thin provisioning to consolidate all available (local) NVMe devices into a single storage pool via one Volume Group (similar to use case 2) and then use a thin pool to provision storage for the application pods. This approach offers the flexibility to add more storage devices to the thin-provisioned Volume Group as needed, increasing the available capacity of the thin pool – all transparent to the application pods mounting Persistent Volumes (i.e., thin volumes) from that Volume Group.

Fig 6: Using Thin Provisioning with multiple NVMe devices
Therefore, to enable thin provisioning for any use case, the SC needs to be configured, for example, as shown below:
kubectl apply -f - <<EOF
allowVolumeExpansion: true
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/lvmvg-caching
    values:
    - db2ucaching_vg
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-db2ucaching-lvm
parameters:
  storage: lvm
  thinProvision: "yes"
  volgroup: db2ucaching_vg
provisioner: local.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
The change in the SC shown above is the thinProvision: "yes" setting. Any of the previous VG/LVM strategies can use this setting in their respective Storage Classes to enable thin provisioning.
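As noted earlier, one benefit of this pattern is that capacity can be added later without the application pods noticing. A hedged sketch of growing the caching VG and its thin pool, assuming a newly attached device /dev/nvme3n1 (hypothetical) and the <vg>_thinpool naming visible in the node output below:
# Add the new device to the existing Volume Group (device name is hypothetical)
pvcreate /dev/nvme3n1
vgextend db2ucaching_vg /dev/nvme3n1

# Grow the OpenEBS-created thin pool into the newly added free extents
lvextend -l +100%FREE db2ucaching_vg/db2ucaching_vg_thinpool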
Upon successful deployment, checking the nodes shows that thin pool volumes are created for the PVCs as expected, similar to the output below:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1 259:0 0 1.7T 0 disk
├─nvme1n1p1 259:6 0 884.8G 0 part
├─db2ucaching_vg-db2ucaching_vg_thinpool_tmeta 253:1 0 52M 0 lvm
└─db2ucaching_vg-db2ucaching_vg_thinpool-tpool 253:4 0 50G 0 lvm
├─db2ucaching_vg-db2ucaching_vg_thinpool 253:5 0 50G 1 lvm
└─db2ucaching_vg-pvc--ef09375a--72e1--4728--bdb4--1511c5298765 253:6 0 50G 0 lvm /var/lib/kubelet/pods/29df74c9-4ec2-4609-9065-3ee3e1dc2dc7/volumes/kubernetes.io~csi/pvc-ef09375a-72e1-4728-bdb4-1511c5298765/mount
└─db2ucaching_vg-db2ucaching_vg_thinpool_tdata 253:2 0 50G 0 lvm
└─db2ucaching_vg-db2ucaching_vg_thinpool-tpool 253:4 0 50G 0 lvm
├─db2ucaching_vg-db2ucaching_vg_thinpool 253:5 0 50G 1 lvm
└─db2ucaching_vg-pvc--ef09375a--72e1--4728--bdb4--1511c5298765 253:6 0 50G 0 lvm /var/lib/kubelet/pods/29df74c9-4ec2-4609-9065-3ee3e1dc2dc7/volumes/kubernetes.io~csi/pvc-ef09375a-72e1-4728-bdb4-1511c5298765/mount
└─nvme1n1p2 259:7 0 884.8G 0 part
├─db2usystemp_vg-db2usystemp_vg_thinpool_tmeta 253:0 0 52M 0 lvm
└─db2usystemp_vg-db2usystemp_vg_thinpool-tpool 253:7 0 50G 0 lvm
├─db2usystemp_vg-db2usystemp_vg_thinpool 253:8 0 50G 1 lvm
└─db2usystemp_vg-pvc--2e74d144--ed47--4c20--af19--2eb564924c5c 253:9 0 50G 0 lvm /var/lib/kubelet/pods/29df74c9-4ec2-4609-9065-3ee3e1dc2dc7/volumes/kubernetes.io~csi/pvc-2e74d144-ed47-4c20-af19-2eb564924c5c/mount
└─db2usystemp_vg-db2usystemp_vg_thinpool_tdata 253:3 0 50G 0 lvm
└─db2usystemp_vg-db2usystemp_vg_thinpool-tpool 253:7 0 50G 0 lvm
├─db2usystemp_vg-db2usystemp_vg_thinpool 253:8 0 50G 1 lvm
└─db2usystemp_vg-pvc--2e74d144--ed47--4c20--af19--2eb564924c5c 253:9 0 50G 0 lvm /var/lib/kubelet/pods/29df74c9-4ec2-4609-9065-3ee3e1dc2dc7/volumes/kubernetes.io~csi/pvc-2e74d144-ed47-4c20-af19-2eb564924c5c/mount
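Because thin volumes are over-committed by design, it is worth keeping an eye on thin pool usage on each node so the data and metadata areas do not fill up. A minimal monitoring sketch, assuming standard LVM reporting options:
# Report thin pool and thin volume usage for the two VGs
sudo lvs -o lv_name,lv_size,data_percent,metadata_percent db2ucaching_vg db2usystemp_vg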
Conclusion
In this study, we evaluated the use of NVMe devices to optimize both space utilization and performance for Db2 Warehouse storage types, including the NCOS caching tier and temporary tablespaces (tempts). We examined several VG/LVM configuration strategies for NVMe devices on Amazon EKS and provided practical guidance on how each strategy can be implemented based on specific deployment scenarios.
Selecting the optimal VG/LVM configuration strategy for your Db2 Warehouse deployment will largely depend on the available node types – such as whether nodes have multiple NVMe devices – and your specific performance priorities. For instance, if your workload places greater emphasis on system temporary space performance than on NCOS caching, that priority should guide your configuration strategy. With that in mind, the general rule of thumb is as follows:
- When two or more NVMe devices are available per worker node:
Use Case 2: Creating a single Volume Group across multiple NVMe devices enables efficient utilization of all available drives without compromising performance. OpenEBS automatically allocates Logical Volumes (LVs) based on the capacity defined in each storage section of the Custom Resource (CR). Additionally, thin provisioning can be applied to maximize space efficiency by allocating storage dynamically rather than upfront.
- When only one NVMe device is available per worker node:
Use Case 3: Partitioning the NVMe device into multiple Volume Groups provides fine-grained control over how storage is allocated between Db2’s NCOS caching tier and system temporary spaces. This approach is especially useful when workload characteristics demand prioritization – for example, if performance impact from join spills to system temps is a greater concern than access latency to objects on Cloud Object Storage.
About the Authors
Aruna De Silva is the architect for Db2/Db2 Warehouse containerized offerings on IBM Cloud Pak for Data, OpenShift, and Kubernetes. He has nearly 20 years of database technology experience and is based out of the IBM Toronto software laboratory.
Since 2015, he has been actively involved in modernizing Db2, bringing Db2 Warehouse – Common Container, the first containerized Db2 solution, into production in 2016. Since 2019, he has been primarily focused on bringing the success of Db2 Warehouse to cloud-native platforms such as OpenShift and Kubernetes while embracing microservice architecture and deployment patterns. He can be contacted at adesilva@ca.ibm.com.
Hamdi Roumani is a senior manager of the Db2 COS team, overseeing Db2's native cloud object storage support and the columnar storage engine. With extensive experience at IBM, Hamdi began his career on the availability team as a developer, contributing to numerous enhancements for Db2's backup, restore, and write-ahead logging facilities. He then transitioned to the newly formed IBM Cloud IaaS team, where he focused on building the next-generation platform layer, specifically the storage layer (object and block storage) and the monitoring framework used for billing. Recently, Hamdi returned to Db2 to work on the next-generation warehouse engine. He can be reached at roumani@ca.ibm.com.
Labi Adekoya is a Reliability Engineer working on containerized Db2 Warehouse offerings. With over 12 years of experience, he focuses on building and demystifying reliable distributed systems. He can be reached at owolabi.adekoya@ibm.com.