Deploying Ceph Storage and Integrating with Red Hat OpenShift for IBM Security Guardium Key Lifecycle Manager
Introduction
This document provides a step-by-step guide for deploying a highly available, three-node Ceph storage cluster with best practices for stability, performance, and scalability, integrating it with a Red Hat OpenShift cluster, and then installing IBM Security Guardium Key Lifecycle Manager (SGKLM).
1. Pre-Requisites & Architecture Planning
1.1 Hardware Requirements
- Monitor Nodes: At least three monitor nodes to maintain quorum.
- OSD Nodes: At least three OSD nodes for fault tolerance (this guide uses three OSDs per node).
- Metadata Server (MDS): Required for CephFS deployments.
- Network:
- Public Network for client access (1 GbE/10 GbE recommended).
- Cluster Network for replication and OSD heartbeats (separate from public).
- Use bonding/LACP for high availability.
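As a rough sketch, a bonded interface for the public network could be created with NetworkManager as follows (the interface names ens1f0/ens1f1, the connection names, and the reuse of node01's address are placeholders for your environment):
nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet con-name bond0-port1 ifname ens1f0 master bond0
nmcli con add type ethernet con-name bond0-port2 ifname ens1f1 master bond0
nmcli con mod bond0 ipv4.addresses 10.221.192.195/24 ipv4.method manual
nmcli con up bond0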
1.2 OS Requirements
- RHEL 8 / CentOS 9
- Set a unique, resolvable hostname on each node (see the example after this list).
- Set SELinux to permissive mode:
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
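For example, the hostname and time synchronization (which the Ceph monitors depend on) can be configured on each node as follows, shown here for node01 (the chrony steps are an addition beyond the list above):
hostnamectl set-hostname node01
dnf install chrony -y
systemctl enable --now chronyd
chronyc sources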
2. Install Ceph Packages & Configure SSH
2.1 Install Required Packages
Run the following on all nodes:
yum install epel-release -y
dnf install ceph ceph-common -y
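The bootstrap step in section 4 uses cephadm, which the packages above do not provide; assuming the Ceph repository is already enabled, it can be installed on node01 with:
dnf install cephadm -y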
2.2 Setup SSH Access
Generate SSH keys on the first node (node01):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
Copy SSH keys to all nodes:
ssh-copy-id root@node01
ssh-copy-id root@node02
ssh-copy-id root@node03
Test passwordless SSH login:
ssh node02 "hostname"
ssh node03 "hostname"
3. Configure Ceph Cluster
3.1 Add Nodes to /etc/hosts (on all nodes)
echo "10.221.192.195 node01" >> /etc/hosts
echo "10.221.192.196 node02" >> /etc/hosts
echo "10.221.192.197 node03" >> /etc/hosts
4. Bootstrap the Ceph Cluster
On node01, initialize the cluster:
cephadm bootstrap --mon-ip 10.221.x.x --initial-dashboard-user admin --initial-dashboard-password admin123
Enable Ceph CLI access:
cephadm shell
Set up the admin keyring:
mkdir -p /etc/ceph
ceph config generate-minimal-conf > /etc/ceph/ceph.conf
ceph auth get-or-create client.admin mon 'allow *' osd 'allow *' mgr 'allow *' > /etc/ceph/ceph.keyring
5. Add Remaining Nodes to the Cluster
Copy the Ceph SSH key to node02 and node03:
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node03
Add nodes to the cluster:
ceph orch host add node02 10.221.192.196
ceph orch host add node03 10.221.192.197
Verify cluster status:
ceph status
6. Deploy OSDs (Object Storage Daemons)
6.1 Check Available Devices
ceph orch device ls
6.2 Add OSDs
ceph orch daemon add osd node01:/dev/sdb
ceph orch daemon add osd node01:/dev/sdc
ceph orch daemon add osd node01:/dev/sdd
ceph orch daemon add osd node02:/dev/sdb
ceph orch daemon add osd node02:/dev/sdc
ceph orch daemon add osd node02:/dev/sdd
ceph orch daemon add osd node03:/dev/sdb
ceph orch daemon add osd node03:/dev/sdc
ceph orch daemon add osd node03:/dev/sdd
Or apply all available devices:
ceph orch apply osd --all-available-devices
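Alternatively, the same result can be expressed as a declarative OSD service specification; the sketch below (the file name and service_id are illustrative) tells the orchestrator to consume all available devices on hosts matching node*:
cat <<EOF > osd_spec.yml
service_type: osd
service_id: default_drives
placement:
  host_pattern: 'node*'
spec:
  data_devices:
    all: true
EOF
ceph orch apply -i osd_spec.yml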
Verify OSD status:
ceph osd tree
ceph -s
7. Verify & Manage Ceph Services
Because the cluster was bootstrapped with cephadm, the monitor, manager, and OSD daemons run as containers and their systemd units (named ceph-<fsid>@<daemon>.service on each host) are created and enabled by the orchestrator, so they do not need to be enabled manually. Verify that all services and daemons are running:
ceph orch ls
ceph orch ps
Restart an individual daemon if required, for example the monitor on node01:
ceph orch daemon restart mon.node01
Check overall cluster health:
ceph -s
8. Create and Manage Ceph Storage Pools
8.1 Create a Ceph Filesystem
ceph fs volume create test
ceph fs ls
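The MDS daemons and the data/metadata pools backing the new filesystem can be inspected with:
ceph fs status test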
8.2 Create a Block Storage Pool (RBD)
ceph osd pool create rbd 128 128
rbd pool init rbd
List pools:
ceph osd lspools
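As a quick sanity check, an image can be created in the new pool and listed (the image name test-image is only an example):
rbd create --size 1024 rbd/test-image
rbd ls rbd
rbd info rbd/test-image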
9. Configure Ceph Dashboard
Enable the dashboard:
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
Set a new admin password (recent Ceph releases require the password to be supplied from a file):
echo 'NewPassword123' > /tmp/dashboard_password.txt
ceph dashboard set-login-credentials admin -i /tmp/dashboard_password.txt
rm -f /tmp/dashboard_password.txt
Find the dashboard URL:
ceph mgr services
Open the Ceph dashboard URL in a browser and log in with the credentials set above.

Ceph Dashboard: the rbd pool created in the Ceph cluster.

10. Performance Tuning & Monitoring
10.1 Set Optimal Replication Factor
Set the replication factor (size) for each data pool, for example the rbd pool created in section 8.2:
ceph osd pool set rbd size 3
10.2 Tune Ceph for Performance
Edit /etc/ceph/ceph.conf and add:
[global]
osd_pool_default_pg_num = 256
osd_pool_default_pgp_num = 256
osd_crush_chooseleaf_type = 1
Restart Ceph services:
systemctl restart ceph-osd.target
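Because this cluster is managed by cephadm, these defaults can also be applied centrally through the monitor configuration database instead of editing ceph.conf on every node, and an individual containerized OSD can be restarted through the orchestrator (osd.0 below is just an example daemon name):
ceph config set global osd_pool_default_pg_num 256
ceph config set global osd_pool_default_pgp_num 256
ceph config set global osd_crush_chooseleaf_type 1
ceph orch daemon restart osd.0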
10.3 Enable Monitoring
ceph -s
ceph osd tree
ceph df
ceph health detail
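For continuous observation, the cluster log can be followed live and the PG autoscaler status reviewed per pool:
ceph -w
ceph osd pool autoscale-status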
Creating an OpenShift Data Foundation cluster for external Ceph storage
Procedure
- Click Operators → Installed Operators to view all the installed operators.
- Ensure that the Project selected is openshift-storage.
- Click OpenShift Data Foundation and then click Create StorageSystem.
- In the Backing storage page, select the following options:
- Select Full deployment for the Deployment type option.
- Select Connect an external storage platform from the available options.
- Select Ceph Storage for Storage platform.
- Click Next.
- In the Connection details page, provide the necessary information:
- Click the Download Script link to download the python script for extracting Ceph cluster details.
- To extract the Red Hat Ceph Storage (RHCS) cluster details, ask the RHCS administrator to run the downloaded python script on a Red Hat Ceph Storage node with the admin key.
- Run the following command on the RHCS node to view the list of available arguments:
# python3 ceph-external-cluster-details-exporter.py --help
For example:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name gklm
Example of JSON output generated using the python script:
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
Save the JSON output to a file with a .json extension, then upload it on the Connection details page, click Next, and complete the wizard to create the StorageSystem.
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-external-storagecluster-storagesystem → Resources.
- Verify that the Status of StorageCluster is Ready and has a green tick.

Verify the state of the pods:
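For example, assuming the default openshift-storage namespace:
oc get pods -n openshift-storage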

Check the health of the OpenShift Data Foundation cluster using the object dashboard in the OpenShift web console.

Verify that the storage cluster is Ready:
# oc get storagecluster -n openshift-storage
NAME                          AGE    PHASE   EXTERNAL   CREATED AT             VERSION
ocs-external-storagecluster   2d7h   Ready   true       2025-02-04T03:40:03Z   4.16.6
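As an optional end-to-end check, a test PVC can be created against the RBD storage class that external mode typically creates (the class name ocs-external-storagecluster-ceph-rbd and the PVC name below are assumptions; confirm the actual class names with oc get storageclass):
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-pvc
  namespace: openshift-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-external-storagecluster-ceph-rbd
EOF
oc get pvc rbd-test-pvc -n openshift-storage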
For more details, refer to the Red Hat documentation:
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_in_external_mode/deploy-openshift-data-foundation-using-red-hat-ceph-storage#deploy-openshift-data-foundation-using-red-hat-ceph-storage
Deploying IBM Security Guardium Key Lifecycle Manager (SGKLM)
Refer to: Installing IBM Security Guardium Key Lifecycle Manager on Red Hat OpenShift
Summary
This document outlines a step-by-step process for deploying a highly available, three-node Ceph storage cluster. It covers best practices to ensure stability, performance, and scalability while integrating with a Red Hat OpenShift cluster. Additionally, it includes instructions for installing IBM Security Guardium Key Lifecycle Manager (SGKLM).
Reach out to us if you need further guidance.
Tamil Selvam R - tamilselvam.ramalingam@ibm.com