Seamless Transitions: Cloud Pak for Data Storage Class Migration in Action

By Hongwei Jia posted Sun May 04, 2025 11:38 PM

  


For Cloud Pak for Data clusters installed with incorrect storage classes, there is a potential risk of stability and performance issues. Reinstalling the cluster with the correct storage classes can be resource-intensive, especially for production environments with large data volumes.

Is there an alternative solution to address the problem of incorrect storage classes? The answer is yes. This article will showcase a practical approach to Cloud Pak for Data Storage Class Migration, enabling you to mitigate the issue without a full cluster re-installation.

Note:

Although this solution has been tested and validated across multiple environments, it is not a storage class migration method that is officially certified by IBM. Therefore, proceed at your own risk.


Migration context

Environment info

OCP: 4.14
CPD: 4.8.1
ODF: 4.14
Components: cpfs, cpd_platform, wkc, analyticsengine

Incorrect storage classes use case

When installing Cloud Pak for Data, the file storage class ocs-storagecluster-cephfs was used by the core database services (e.g. CouchDB, ElasticSearch, RabbitMQ and Redis) of the Common Core Services (CCS) component. The block storage class ocs-storagecluster-ceph-rbd should have been used instead.

Here's an example of the persistent volumes that need to be migrated from ocs-storagecluster-cephfs to ocs-storagecluster-ceph-rbd.

database-storage-wdp-couchdb-0                    Bound   pvc-e80583ab-5616-4fab-b68a-bef3dd9efd29  45Gi      RWO           ocs-storagecluster-cephfs    6d23h
database-storage-wdp-couchdb-1                    Bound   pvc-f869e03c-143a-46d1-b17a-9f6ae23a25da  45Gi      RWO           ocs-storagecluster-cephfs    6d23h
database-storage-wdp-couchdb-2                    Bound   pvc-eaa3bb34-6eb8-40f7-b34d-6869b0967ca6  30Gi      RWO           ocs-storagecluster-cephfs    6d23h
data-elasticsea-0ac3-ib-6fb9-es-server-esnodes-0  Bound   pvc-7dd26fff-20d2-42ae-9343-86444703bb16  30Gi      RWO           ocs-storagecluster-cephfs    6d19h
data-elasticsea-0ac3-ib-6fb9-es-server-esnodes-1  Bound   pvc-c67ef48e-df38-45d5-b471-28ea5bae0bbb  30Gi      RWO           ocs-storagecluster-cephfs    6d19h
data-elasticsea-0ac3-ib-6fb9-es-server-esnodes-2  Bound   pvc-c9f51fcb-c3ba-426f-a9b3-5bc3da7d8069  30Gi      RWO           ocs-storagecluster-cephfs    6d19h
data-rabbitmq-ha-0                                Bound   pvc-90c2d3ee-14bf-41c9-9b93-2be66611be19  10Gi      RWO           ocs-storagecluster-cephfs    6d23h
data-rabbitmq-ha-1                                Bound   pvc-1702ddbc-a0b0-4485-9ac1-742842290e15  10Gi      RWO           ocs-storagecluster-cephfs    6d23h
data-rabbitmq-ha-2                                Bound   pvc-688398c4-e3e4-4b05-9631-db4b63f494ba  10Gi      RWO           ocs-storagecluster-cephfs    6d23h
data-redis-ha-server-0                            Bound   pvc-995cc4e1-4ee2-4faa-b56d-6c592fb1b6ae  10Gi      RWO           ocs-storagecluster-cephfs    6d23h
data-redis-ha-server-1                            Bound   pvc-5a698024-e4d4-4ca2-8a6d-965e89d1feb9  10Gi      RWO           ocs-storagecluster-cephfs    6d23h
data-redis-ha-server-2                            Bound   pvc-958ff62b-5ad1-464e-8f85-d416b73f32c5  10Gi      RWO           ocs-storagecluster-cephfs    6d23h

Migration in action

1 Pre-migration tasks

1.1 Have a cluster level backup

Back up your Cloud Pak for Data installation before you start the migration. For details, see Backing up and restoring Cloud Pak for Data (https://www.ibm.com/docs/en/SSQNUZ_4.8.x/cpd/admin/backup_restore.html).

1.2 Back up the resources relevant to the PV migration

1.Create a backup directory.

mkdir -p /opt/ibm/cpd_pv_migration
export CPD_PV_MIGRATION_DIR=/opt/ibm/cpd_pv_migration

2.Back up the CCS CR.

export PROJECT_CPD_INST_OPERANDS=<your Cloud Pak for Data instance namespace>
oc get ccs -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/ccs-cr.yaml

3.Back up ElasticSearch.

oc get elasticsearchcluster elasticsearch-master -n ${PROJECT_CPD_INST_OPERANDS} -o yaml > ${CPD_PV_MIGRATION_DIR}/cr-elasticsearchcluster.yaml

oc get sts -n ${PROJECT_CPD_INST_OPERANDS} | grep es-server-esnodes | awk '{print $1}'| xargs oc get sts -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/sts-es-server-esnodes-bak.yaml

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep es-server-esnodes | awk '{print $1}') ;do oc get pvc $p -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/pvc-$p-bak.yaml;done

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep es-server-esnodes | awk '{print $3}') ;do oc get pv $p -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/pv-$p-bak.yaml;done

4.Back up CouchDB.

oc get sts -n ${PROJECT_CPD_INST_OPERANDS} | grep couch | awk '{print $1}'| xargs oc get sts -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/sts-wdp-couchdb-bak.yaml

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep couchdb | awk '{print $1}') ;do oc get pvc $p -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/pvc-$p-bak.yaml;done

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep couchdb | awk '{print $3}') ;do oc get pv $p -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/pv-$p-bak.yaml;done

5.Back up Redis.

oc get sts -n ${PROJECT_CPD_INST_OPERANDS} | grep redis-ha-server | awk '{print $1}'| xargs oc get sts -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/sts-redis-ha-server-bak.yaml

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep redis-ha-server | awk '{print $1}') ;do oc get pvc $p -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/pvc-$p-bak.yaml;done

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep redis-ha-server | awk '{print $3}') ;do oc get pv $p -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/pv-$p-bak.yaml;done

6.Back up RabbitMQ.

oc get sts -n ${PROJECT_CPD_INST_OPERANDS} | grep rabbitmq-ha | awk '{print $1}'| xargs oc get sts -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/sts-rabbitmq-ha-bak.yaml

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep rabbitmq-ha | awk '{print $1}') ;do oc get pvc $p -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/pvc-$p-bak.yaml;done

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep rabbitmq-ha | awk '{print $3}') ;do oc get pv $p -o yaml -n ${PROJECT_CPD_INST_OPERANDS} > ${CPD_PV_MIGRATION_DIR}/pv-$p-bak.yaml;done
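
A quick listing of the backup directory confirms that the CR, statefulset, PVC and PV definitions were all captured:

ls -l ${CPD_PV_MIGRATION_DIR}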

1.3 Mirror images (optional)

This step is only required for air-gapped environments.

1.3.1 Mirror the rhel-tools image
  • 1.Save the image on an internet-connected machine.
podman pull registry.access.redhat.com/rhel7/rhel-tools:latest
podman save registry.access.redhat.com/rhel7/rhel-tools:latest -o rhel-tools.tar
  • 2.Copy the rhel-tools.tar file to the bastion node.

  • 3.Load the image from rhel-tools.tar and push it to the private image registry.

podman load -i rhel-tools.tar

podman images | grep rhel-tools

podman login -u <username> -p <password> <target_registry> --tls-verify=false

podman tag registry.access.redhat.com/rhel7/rhel-tools:latest <target_registry>/rhel7/rhel-tools:latest

podman push <target_registry>/rhel7/rhel-tools:latest --tls-verify=false
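
Before relying on the mirrored image during the migration, it can help to verify that it is pullable from the private registry. A minimal check, assuming skopeo is available on the bastion node:

skopeo inspect --tls-verify=false docker://<target_registry>/rhel7/rhel-tools:latest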

1.4 Ensure the required permissions are ready

It's recommended to have OpenShift cluster administrator permissions ready for this migration.
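
A quick way to confirm that the logged-in user effectively has cluster-administrator rights (this only verifies RBAC at the OpenShift level; the expected answer is yes):

oc auth can-i '*' '*' --all-namespaces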

1.5 Perform a health check to ensure the cluster's readiness for the migration

The OpenShift cluster, the persistent storage, and the Cloud Pak for Data platform and services should all be in a healthy status.

  • 1.Check OCP status


Log in to OCP and run the command below.

oc get co

Make sure all the cluster operators are in AVAILABLE status, and none are in PROGRESSING or DEGRADED status.
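
To list only the operators that are not in the expected state, a small filter such as the following can be used (it assumes the default oc get co column order of AVAILABLE, PROGRESSING, DEGRADED):

oc get co --no-headers | awk '$3!="True" || $4!="False" || $5!="False"'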



Run this command and make sure all nodes are in Ready status.

oc get nodes

Run this command and make sure all the machine config pools are in a healthy status.

oc get mcp
  • 2.Check Cloud Pak for Data status

Log on to the bastion node, run the following commands in a terminal, and make sure that Lite and all the services are in Ready status.

cpd-cli manage login-to-ocp \
--username=${OCP_USERNAME} \
--password=${OCP_PASSWORD} \
--server=${OCP_URL}
cpd-cli manage get-cr-status --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}

Run this command and make sure all pods are healthy.

oc get po --no-headers --all-namespaces -o wide | grep -Ev '([[:digit:]])/\1.*R' | grep -v 'Completed'
  • 3.Check ODF status


Make sure the ODF cluster status is healthy and that it has enough capacity.

oc describe cephcluster ocs-storagecluster-cephcluster -n openshift-storage
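
The overall Ceph health can also be read directly from the CephCluster status; the expected value is HEALTH_OK (the field path assumes the standard Rook/ODF CR layout):

oc get cephcluster ocs-storagecluster-cephcluster -n openshift-storage -o jsonpath='{.status.ceph.health}{"\n"}'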
  • 4.Check the ElasticSearch snapshot repository


Make sure that a snapshot repository already exists that can be used to take a new snapshot. Normally it is initialized when the cluster is first created, but it helps to verify:

oc exec  elasticsea-0ac3-ib-6fb9-es-server-esnodes-0 -c elasticsearch -- curl --request GET --url http://localhost:19200/_cat/snapshots/cloudpak --header 'content-type: application/json'

1.6 Schedule a maintenance time window

This migration work requires downtime. Send a heads-up to all end users before the migration.

It's also recommended to disable the Cloud Pak for Data route just before starting the storage class migration in a production cluster, so that the migration is protected from interruption.

Take a backup of the CPD routes and keep it in a safe place.

oc get route -n ${PROJECT_CPD_INST_OPERANDS} -o yaml > cpd_routes.yaml

Delete the CPD routes.

oc delete route --all -n ${PROJECT_CPD_INST_OPERANDS}

2 Migration

Note:

These migration steps need to be validated carefully in a testing cluster before applying them in a production one. Downtime is expected during this migration.

2.1.Put CCS into maintenance mode

Put CCS into maintenance mode to prevent the migration work from being impacted by operator reconciliation.

oc patch ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}' -n ${PROJECT_CPD_INST_OPERANDS}

Make sure CCS has been put into maintenance mode successfully.

oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}

2.2.Change the ReclaimPolicy to "Retain" for the existing PVs (the ones with the wrong storage class ocs-storagecluster-cephfs)

1.Patch the CouchDB PVs.

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep couchdb | awk '{print $3}') ;do oc patch pv $p -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS};done

Make sure the ReclaimPolicy of the CouchDB PVs has been changed to "Retain".

oc get pv | grep -i couchdb
pvc-05493166-c0e2-4b67-b683-277ee23f51d6   30Gi       RWO            Retain           Bound    cpd/database-storage-wdp-couchdb-2                     ocs-storagecluster-cephfs              17h
pvc-2fb1d306-1d39-4860-acb4-f04bcbd48dea   30Gi       RWO            Retain           Bound    cpd/database-storage-wdp-couchdb-0                     ocs-storagecluster-cephfs              17h
pvc-6cc51d6e-d882-4abd-b50d-9c8d4dbf6276   30Gi       RWO            Retain           Bound    cpd/database-storage-wdp-couchdb-1                     ocs-storagecluster-cephfs              17h

2.Patch the Redis PVs.

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep redis-ha-server | awk '{print $3}') ;do oc patch pv $p -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS};done

Make sure the ReclaimPolicy of the Redis PVs has been changed to "Retain".

oc get pv | grep -i redis
pvc-968033b4-0c99-4bc6-a91f-4b80948dcccf   10Gi       RWO            Retain           Bound    cpd/data-redis-ha-server-1                             ocs-storagecluster-cephfs              17h
pvc-d5869572-5dc5-4e1a-bc28-d94202ba7644   10Gi       RWO            Retain           Bound    cpd/data-redis-ha-server-2                             ocs-storagecluster-cephfs              17h
pvc-d6388a4a-6380-4be3-a94a-c111e0533d66   10Gi       RWO            Retain           Bound    cpd/data-redis-ha-server-0                             ocs-storagecluster-cephfs              17h

3.Patch the Rabbitmq PVs.

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep rabbitmq-ha | awk '{print $3}') ;do oc patch pv $p -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS};done

Make sure the ReclaimPolicy of the RabbitMQ PVs has been changed to "Retain".

oc get pv | grep -i rabbitmq
pvc-02f9546c-d6b5-49a2-8530-890f0ab8908a   10Gi       RWO            Retain           Bound    cpd/data-rabbitmq-ha-0                                 ocs-storagecluster-cephfs              17h
pvc-58eac11a-0da2-4ca8-a0b3-841cacc0a6ad   10Gi       RWO            Retain           Bound    cpd/data-rabbitmq-ha-1                                 ocs-storagecluster-cephfs              17h
pvc-d56efba2-e8b3-4f84-a10d-24b752ab1dea   10Gi       RWO            Retain           Bound    cpd/data-rabbitmq-ha-2                                 ocs-storagecluster-cephfs              17h

2.3.Migration for ElasticSearch

Reference: https://github.ibm.com/wdp-gov/global-search-wiki/wiki/Migrate-ES-between-storage-types
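
The referenced wiki remains the authoritative procedure. At a high level it relies on the standard ElasticSearch snapshot and restore APIs against the cloudpak repository verified in section 1.5, combined with the same PVC recreation pattern used for CouchDB, Redis and RabbitMQ below. A rough, hedged sketch of the snapshot calls only (the snapshot name sc-migration is illustrative):

# take a snapshot of all indices into the existing cloudpak repository
oc exec elasticsea-0ac3-ib-6fb9-es-server-esnodes-0 -c elasticsearch -- curl --request PUT --url "http://localhost:19200/_snapshot/cloudpak/sc-migration?wait_for_completion=true"

# after the es-server-esnodes PVCs have been recreated on ocs-storagecluster-ceph-rbd,
# restore the snapshot (existing indices typically need to be closed or deleted first)
oc exec elasticsea-0ac3-ib-6fb9-es-server-esnodes-0 -c elasticsearch -- curl --request POST --url "http://localhost:19200/_snapshot/cloudpak/sc-migration/_restore"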

2.4 Migration for CouchDB

2.4.1 Preparation
  • Get old PVC name and volume name.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep -i wdp-couchdb

Sample output looks like this:

NAME                                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS     AGE
database-storage-wdp-couchdb-0                     Bound    pvc-e8eed9f7-7bb6-4b5d-ab8d-21d49c4ecf35   30Gi       RWO            ocs-storagecluster-cephfs   89d
database-storage-wdp-couchdb-1                     Bound    pvc-14d811c3-b4d8-42c5-b6d9-1c3a44c25534   30Gi       RWO            ocs-storagecluster-cephfs   89d
database-storage-wdp-couchdb-2                     Bound    pvc-fb73ce1c-36b0-4358-a66d-9142bf0ce7b7   30Gi       RWO            ocs-storagecluster-cephfs   89d
  • Note the mount path of the data volume /opt/couchdb/data by checking the volumeMounts definition in the wdp-couchdb statefulset YAML (see the jsonpath check after this list).
          volumeMounts:
            - name: database-storage
              mountPath: /opt/couchdb/data
  • Make sure that the wdp-couchdb statefulset has been scaled down to zero replicas.
oc scale sts wdp-couchdb -n ${PROJECT_CPD_INST_OPERANDS} --replicas=0
oc get sts -n ${PROJECT_CPD_INST_OPERANDS} | grep -i wdp-couchdb
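
To confirm the mount path without opening the full statefulset YAML, a quick jsonpath query can be used (optional):

oc get sts wdp-couchdb -n ${PROJECT_CPD_INST_OPERANDS} -o jsonpath='{.spec.template.spec.containers[*].volumeMounts}'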
2.4.2 Start a new temporary deployment using the rhel-tools image
oc -n ${PROJECT_CPD_INST_OPERANDS} create deployment sleep --image=registry.access.redhat.com/rhel7/rhel-tools -- tail -f /dev/null
2.4.3 Migration for the database-storage-wdp-couchdb-0 pvc
  • Create a new PVC by referencing the database-storage-wdp-couchdb-0 pvc.
oc get pvc database-storage-wdp-couchdb-0 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-database-storage-wdp-couchdb-0-new.json

Specify a new name and the right storage class (ocs-storagecluster-ceph-rbd) for the new PVC

tmp=$(mktemp)
jq '.metadata.name = "database-storage-wdp-couchdb-0-new"' pvc-database-storage-wdp-couchdb-0-new.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-0-new.json

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-database-storage-wdp-couchdb-0-new.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-0-new.json

Create the new PVC.

oc apply -f pvc-database-storage-wdp-couchdb-0-new.json

Make sure the new PVC is created successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep database-storage-wdp-couchdb-0-new

Scale down the sleep deployment

oc scale deploy sleep --replicas=0

Wait until it's scaled down.

  • Mount the old database-storage-wdp-couchdb-0 PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=old-claim --claim-name=database-storage-wdp-couchdb-0 --mount-path=/old-claim
  • Mount the new database-storage-wdp-couchdb-0-new PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=new-claim --claim-name=database-storage-wdp-couchdb-0-new --mount-path=/new-claim
  • Scale back the sleep deployment
oc scale deploy sleep --replicas=1

Wait until the sleep pod is up and running.

oc get pods | grep -i sleep
  • rsh into the sleep pod
oc rsh $(oc get pod | grep sleep | awk '{print $1}')
  • Migrate data to the new storage:
rsync -avxHAX --progress /old-claim/* /new-claim

Note: Make sure the terminal session will not be closed or expire during this step.
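
If keeping an interactive session open is a concern, one option is to start the copy in the background from outside the pod and follow its log instead; a hedged sketch (the log path /tmp/rsync.log is illustrative):

# start the copy in the background inside the sleep pod
SLEEP_POD=$(oc get pod | grep sleep | awk '{print $1}')
oc exec $SLEEP_POD -- bash -c 'nohup rsync -avxHAX /old-claim/* /new-claim > /tmp/rsync.log 2>&1 &'

# check progress without keeping a session open
oc exec $SLEEP_POD -- tail -n 5 /tmp/rsync.log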

  • Validate the migration


Ensure the number of files in the old-claim and new-claim folders is the same. For example:

sh-4.2$ cd old-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
_dbs.couch                : 1
_nodes.couch              : 1
search_indexes            : 1749
shards                    : 297

sh-4.2$ cd ../new-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
_dbs.couch                : 1
_nodes.couch              : 1
search_indexes            : 1749
shards                    : 297
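
Optionally, a stricter check is to compare the complete file lists of the two volumes from inside the sleep pod; a minimal sketch (the lost+found directory that may exist on the new block volume is excluded):

find /old-claim -mindepth 1 | sed 's|^/old-claim/||' | sort > /tmp/old.list
find /new-claim -mindepth 1 -not -path '/new-claim/lost+found*' | sed 's|^/new-claim/||' | sort > /tmp/new.list
diff /tmp/old.list /tmp/new.list && echo "file lists match"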
  • Remove the volume mounts from the sleep deployment
oc set volume deployment sleep --remove --confirm
  • Patch the PV of the database-storage-wdp-couchdb-0-new PVC to change the ReclaimPolicy to "Retain"
oc patch pv $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep wdp-couchdb-0-new | awk '{print $3}') -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS}

Make sure the ReclaimPolicy was changed to "Retain" successfully.

oc get pv $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep wdp-couchdb-0-new | awk '{print $3}')
  • Recreate original database-storage-wdp-couchdb-0 PVC


The recreated database-storage-wdp-couchdb-0 PVC will point to the new PV that was created by the database-storage-wdp-couchdb-0-new PVC and now holds the copied data.

Get the volume name created by the database-storage-wdp-couchdb-0-new PVC.

export PV_NAME_WDP_COUCHDB_0=$(oc get pvc database-storage-wdp-couchdb-0-new --output jsonpath={.spec.volumeName} -n ${PROJECT_CPD_INST_OPERANDS})

Create the JSON file for the recreated database-storage-wdp-couchdb-0 PVC.

oc get pvc database-storage-wdp-couchdb-0 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-database-storage-wdp-couchdb-0-recreate.json
tmp=$(mktemp)

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-database-storage-wdp-couchdb-0-recreate.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-0-recreate.json

Refer to the new PV.

jq --arg PV_NAME_WDP_COUCHDB_0 "$PV_NAME_WDP_COUCHDB_0" '.spec.volumeName = $PV_NAME_WDP_COUCHDB_0' pvc-database-storage-wdp-couchdb-0-recreate.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-0-recreate.json

Remove the old and new PVCs for wdp-couchdb-0

oc delete pvc database-storage-wdp-couchdb-0-new -n ${PROJECT_CPD_INST_OPERANDS}

oc delete pvc database-storage-wdp-couchdb-0 -n ${PROJECT_CPD_INST_OPERANDS}

Remove the claimRef section from the new PV.

oc patch pv $PV_NAME_WDP_COUCHDB_0 -p '{"spec":{"claimRef": null}}'

Recreate the database-storage-wdp-couchdb-0 PVC.

oc apply -f pvc-database-storage-wdp-couchdb-0-recreate.json

Make sure the new PVC is created and bound successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep database-storage-wdp-couchdb-0
2.4.4 Migration for the database-storage-wdp-couchdb-1 pvc
  • Create a new PVC by referencing the database-storage-wdp-couchdb-1 pvc.
oc get pvc database-storage-wdp-couchdb-1 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-database-storage-wdp-couchdb-1-new.json

Specify a new name and the right storage class (ocs-storagecluster-ceph-rbd) for the new PVC.

tmp=$(mktemp)

jq '.metadata.name = "database-storage-wdp-couchdb-1-new"' pvc-database-storage-wdp-couchdb-1-new.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-1-new.json

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-database-storage-wdp-couchdb-1-new.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-1-new.json
oc apply -f pvc-database-storage-wdp-couchdb-1-new.json

Make sure the new PVC is created successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep database-storage-wdp-couchdb-1-new

Scale down the sleep deployment

oc scale deploy sleep --replicas=0

Wait until it's scaled down.

  • Mount the old database-storage-wdp-couchdb-1 PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=old-claim --claim-name=database-storage-wdp-couchdb-1 --mount-path=/old-claim
  • Mount the new database-storage-wdp-couchdb-1-new PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=new-claim --claim-name=database-storage-wdp-couchdb-1-new --mount-path=/new-claim
  • Scale back the sleep deployment
oc scale deploy sleep --replicas=1

Wait until the sleep pod is up and running.

oc get pods | grep -i sleep
  • rsh into the sleep pod
oc rsh $(oc get pod | grep sleep | awk '{print $1}')
  • Migrate data to the new storage:
rsync -avxHAX --progress /old-claim/* /new-claim

Note: Make sure the terminal session will not be closed or expire during this step.

  • Validate the migration


Ensure the number of files in the old-claim and new-claim folders is the same. For example:

sh-4.2$ cd old-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
_dbs.couch                : 1
_nodes.couch              : 1
search_indexes            : 1749
shards                    : 297

sh-4.2$ cd ../new-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
_dbs.couch                : 1
_nodes.couch              : 1
search_indexes            : 1749
shards                    : 297
  • Remove the volume mounts from the sleep deployment
oc set volume deployment sleep --remove --confirm
  • Patch the PV of the database-storage-wdp-couchdb-1-new PVC to change the ReclaimPolicy to "Retain"
oc patch pv $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep wdp-couchdb-1-new | awk '{print $3}') -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS}
  • Recreate original database-storage-wdp-couchdb-1 PVC


The recreated database-storage-wdp-couchdb-1 PVC will point to the new PV that was created by the database-storage-wdp-couchdb-1-new PVC and now holds the copied data.

Get the volume name created by the database-storage-wdp-couchdb-1-new PVC.

export PV_NAME_WDP_COUCHDB_1=$(oc get pvc database-storage-wdp-couchdb-1-new --output jsonpath={.spec.volumeName} -n ${PROJECT_CPD_INST_OPERANDS})

Create the JSON file for the recreated database-storage-wdp-couchdb-1 PVC.

oc get pvc database-storage-wdp-couchdb-1 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-database-storage-wdp-couchdb-1-recreate.json
tmp=$(mktemp)

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-database-storage-wdp-couchdb-1-recreate.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-1-recreate.json

Refer to the new PV.

jq --arg PV_NAME_WDP_COUCHDB_1 "$PV_NAME_WDP_COUCHDB_1" '.spec.volumeName = $PV_NAME_WDP_COUCHDB_1' pvc-database-storage-wdp-couchdb-1-recreate.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-1-recreate.json

Remove the old and new PVCs for wdp-couchdb-1

oc delete pvc database-storage-wdp-couchdb-1-new -n ${PROJECT_CPD_INST_OPERANDS}

oc delete pvc database-storage-wdp-couchdb-1 -n ${PROJECT_CPD_INST_OPERANDS}

Remove the claimRef section from the new PV.

oc patch pv $PV_NAME_WDP_COUCHDB_1 -p '{"spec":{"claimRef": null}}'

Recreate the database-storage-wdp-couchdb-1 PVC.

oc apply -f pvc-database-storage-wdp-couchdb-1-recreate.json

Make sure the new PVC is created and bound successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep database-storage-wdp-couchdb-1
2.4.5 Migration for the database-storage-wdp-couchdb-2 pvc
  • Create a new PVC by referencing the database-storage-wdp-couchdb-2 pvc.
oc get pvc database-storage-wdp-couchdb-2 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-database-storage-wdp-couchdb-2-new.json

Specify a new name and the right storage class (ocs-storagecluster-ceph-rbd) for the new PVC.

tmp=$(mktemp)

jq '.metadata.name = "database-storage-wdp-couchdb-2-new"' pvc-database-storage-wdp-couchdb-2-new.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-2-new.json

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-database-storage-wdp-couchdb-2-new.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-2-new.json

Create the wdp-couchdb-2-new pvc.

oc apply -f pvc-database-storage-wdp-couchdb-2-new.json

Make sure the new PVC is created successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep database-storage-wdp-couchdb-2-new

Scale down the sleep deployment

oc scale deploy sleep --replicas=0

Wait until it's scaled down.

  • Mount the old database-storage-wdp-couchdb-2 PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=old-claim --claim-name=database-storage-wdp-couchdb-2 --mount-path=/old-claim
  • Mount the new database-storage-wdp-couchdb-2-new PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=new-claim --claim-name=database-storage-wdp-couchdb-2-new --mount-path=/new-claim
  • Scale back the sleep deployment
oc scale deploy sleep --replicas=1

Wait until the sleep pod is up and running.

oc get pods | grep -i sleep
  • rsh into the sleep pod
oc rsh $(oc get pod | grep sleep | awk '{print $1}')
  • Migrate data to the new storage:
rsync -avxHAX --progress /old-claim/* /new-claim

Note: Make sure the terminal session will not be closed or expire during this step.

  • Validate the migration


Ensure the number of files in the old-claim and new-claim folders is the same. For example:

sh-4.2$ cd old-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
_dbs.couch                : 1
_nodes.couch              : 1
search_indexes            : 1749
shards                    : 297

sh-4.2$ cd ../new-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
_dbs.couch                : 1
_nodes.couch              : 1
search_indexes            : 1749
shards                    : 297
  • Remove the volume mounts from the sleep deployment
oc set volume deployment sleep --remove --confirm
  • Patch the PV of the database-storage-wdp-couchdb-2-new PVC to change the ReclaimPolicy to "Retain"
oc patch pv $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep wdp-couchdb-2-new | awk '{print $3}') -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS}
  • Recreate original database-storage-wdp-couchdb-2 PVC


The recreated database-storage-wdp-couchdb-2 PVC will point to the new PV that was created by the database-storage-wdp-couchdb-2-new PVC and now holds the copied data.

Get the volume name created by the database-storage-wdp-couchdb-2-new PVC.

export PV_NAME_WDP_COUCHDB_2=$(oc get pvc database-storage-wdp-couchdb-2-new --output jsonpath={.spec.volumeName} -n ${PROJECT_CPD_INST_OPERANDS})

Create the JSON file for the recreated database-storage-wdp-couchdb-2 PVC.

oc get pvc database-storage-wdp-couchdb-2 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-database-storage-wdp-couchdb-2-recreate.json

Change the storage class to be ocs-storagecluster-ceph-rbd.

tmp=$(mktemp)

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-database-storage-wdp-couchdb-2-recreate.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-2-recreate.json

Refer to the new PV.

jq --arg PV_NAME_WDP_COUCHDB_2 "$PV_NAME_WDP_COUCHDB_2" '.spec.volumeName = $PV_NAME_WDP_COUCHDB_2' pvc-database-storage-wdp-couchdb-2-recreate.json > "$tmp" && mv -f "$tmp" pvc-database-storage-wdp-couchdb-2-recreate.json

Remove the old and new PVCs for wdp-couchdb-2

oc delete pvc database-storage-wdp-couchdb-2-new -n ${PROJECT_CPD_INST_OPERANDS}

oc delete pvc database-storage-wdp-couchdb-2 -n ${PROJECT_CPD_INST_OPERANDS}

Remove the claimRef section from the new PV.

oc patch pv $PV_NAME_WDP_COUCHDB_2 -p '{"spec":{"claimRef": null}}'

Recreate the database-storage-wdp-couchdb-2 PVC.

oc apply -f pvc-database-storage-wdp-couchdb-2-recreate.json

Make sure the new PVC is created and bound successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep database-storage-wdp-couchdb-2
2.4.6 Scale the wdp-couchdb statefulset back
oc scale sts wdp-couchdb --replicas=3 -n ${PROJECT_CPD_INST_OPERANDS}
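
After scaling back, wait for the CouchDB pods to return to Running and confirm the PVCs are now bound on ocs-storagecluster-ceph-rbd before moving on:

oc get pods -n ${PROJECT_CPD_INST_OPERANDS} | grep wdp-couchdb
oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep wdp-couchdb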

2.5 Migration for Redis

2.5.1 Preparation
  • Get old PVC name and volume name.
oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep -i redis-ha-server

Sample output looks like this:

NAME                                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS     AGE
data-redis-ha-server-0                             Bound    pvc-227f0fe0-7c81-47c8-aa6c-1ee2258351f6   10Gi       RWO            ocs-storagecluster-cephfs   90d
data-redis-ha-server-1                             Bound    pvc-5357a48c-0b3e-40b5-9c80-c8fe6492a5f0   10Gi       RWO            ocs-storagecluster-cephfs   90d
data-redis-ha-server-2                             Bound    pvc-08d70302-c02d-4d6c-8685-c8e2db36f160   10Gi       RWO            ocs-storagecluster-cephfs   90d
  • Note the mount path of the data volume /data by checking the volumeMounts definition in redis-ha-server sts yaml file.
      volumeMounts:
        - mountPath: /data
          name: data
  • Make sure that the redis-ha-server statefulset has been scaled down to zero replicas.
oc scale sts redis-ha-server -n ${PROJECT_CPD_INST_OPERANDS} --replicas=0
oc get sts -n ${PROJECT_CPD_INST_OPERANDS} | grep -i redis-ha-server
2.5.2 Migration for the data-redis-ha-server-0 pvc
  • Create a new PVC by referencing the data-redis-ha-server-0 pvc.
oc get pvc data-redis-ha-server-0 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-redis-ha-server-0-new.json

Specify a new name and the right storage class (ocs-storagecluster-ceph-rbd) for the new PVC.

tmp=$(mktemp)

jq '.metadata.name = "data-redis-ha-server-0-new"' pvc-data-redis-ha-server-0-new.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-0-new.json

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-redis-ha-server-0-new.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-0-new.json

oc apply -f pvc-data-redis-ha-server-0-new.json

Make sure the new PVC is created successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-redis-ha-server-0-new

Scale down the sleep deployment

oc scale deploy sleep --replicas=0

Wait until it's scaled down.

  • Mount the old data-redis-ha-server-0 PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=old-claim --claim-name=data-redis-ha-server-0 --mount-path=/old-claim
  • Mount the new data-redis-ha-server-0-new PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=new-claim --claim-name=data-redis-ha-server-0-new --mount-path=/new-claim
  • Scale back the sleep deployment
oc scale deploy sleep --replicas=1

Wait until the sleep pod is up and running.

oc get pods | grep -i sleep
  • rsh into the sleep pod
oc rsh $(oc get pod | grep sleep | awk '{print $1}')
  • Migrate data to the new storage:
rsync -avxHAX --progress /old-claim/* /new-claim

Note: Make sure the terminal session will not be closed or expire during this step.

  • Validate the migration


Ensure the number of files in the old-claim and new-claim folders is the same (the lost+found directory created on the new block volume can be ignored). For example:

sh-4.2$ cd old-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
conf                      : 3
dump.rdb                  : 1
sh-4.2$ cd ../new-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
conf                      : 3
dump.rdb                  : 1
lost+found                : 1
  • Remove the volume mounts from the sleep deployment
oc set volume deployment sleep --remove --confirm
  • Patch the PV of the data-redis-ha-server-0-new PVC to change the ReclaimPolicy to "Retain"
oc patch pv $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep redis-ha-server-0-new | awk '{print $3}') -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS}
  • Recreate original data-redis-ha-server-0 PVC


The recreated data-redis-ha-server-0 PVC will point to the new PV that was created by the data-redis-ha-server-0-new PVC and now holds the copied data.

Get the volume name created by the data-redis-ha-server-0-new PVC.

export PV_NAME_REDIS_0=$(oc get pvc data-redis-ha-server-0-new --output jsonpath={.spec.volumeName} -n ${PROJECT_CPD_INST_OPERANDS})

Create the JSON file for the recreated data-redis-ha-server-0 PVC.

oc get pvc data-redis-ha-server-0 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-redis-ha-server-0-recreate.json
tmp=$(mktemp)

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-redis-ha-server-0-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-0-recreate.json

Refer to the new PV.

jq --arg PV_NAME_REDIS_0 "$PV_NAME_REDIS_0" '.spec.volumeName = $PV_NAME_REDIS_0' pvc-data-redis-ha-server-0-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-0-recreate.json

Remove the old and new PVCs for redis-ha-server-0

oc delete pvc data-redis-ha-server-0-new -n ${PROJECT_CPD_INST_OPERANDS}

oc delete pvc data-redis-ha-server-0 -n ${PROJECT_CPD_INST_OPERANDS}

Remove the claimRef section from the new PV.

oc patch pv $PV_NAME_REDIS_0 -p '{"spec":{"claimRef": null}}'

Recreate the data-redis-ha-server-0 PVC.

oc apply -f pvc-data-redis-ha-server-0-recreate.json

Make sure the new PVC is created and bound successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-redis-ha-server-0
2.5.3 Migration for the data-redis-ha-server-1 pvc
  • Create a new PVC by referencing the data-redis-ha-server-1 pvc.
oc get pvc data-redis-ha-server-1 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-redis-ha-server-1-new.json

Specify a new name and the right storage class (ocs-storagecluster-ceph-rbd) for the new PVC.

tmp=$(mktemp)

jq '.metadata.name = "data-redis-ha-server-1-new"' pvc-data-redis-ha-server-1-new.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-1-new.json

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-redis-ha-server-1-new.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-1-new.json

oc apply -f pvc-data-redis-ha-server-1-new.json

Make sure the new PVC is created successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-redis-ha-server-1-new

Scale down the sleep deployment

oc scale deploy sleep --replicas=0

Wait until it's scaled down.

  • Mount the old data-redis-ha-server-1 PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=old-claim --claim-name=data-redis-ha-server-1 --mount-path=/old-claim
  • Mount the new data-redis-ha-server-1-new PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=new-claim --claim-name=data-redis-ha-server-1-new --mount-path=/new-claim
  • Scale back the sleep deployment
oc scale deploy sleep --replicas=1

Wait until the sleep pod is up and running.

oc get pods | grep -i sleep
  • rsh into the sleep pod
oc rsh $(oc get pod | grep sleep | awk '{print $1}')
  • Migrate data to the new storage:
rsync -avxHAX --progress /old-claim/* /new-claim

Note: Make sure the terminal session will not be closed or expire during this step.

  • Validate the migration


Ensure the number of files in the old-claim and new-claim folders is the same (the lost+found directory created on the new block volume can be ignored). For example:

sh-4.2$ cd old-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
conf                      : 3
dump.rdb                  : 1
sh-4.2$ cd ../new-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
conf                      : 3
dump.rdb                  : 1
lost+found                : 1
  • Remove the volume mounts from the sleep deployment
oc set volume deployment sleep --remove --confirm
  • Patch the PV of the data-redis-ha-server-1-new PVC to change the ReclaimPolicy to "Retain"
oc patch pv $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep redis-ha-server-1-new | awk '{print $3}') -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS}
  • Recreate original data-redis-ha-server-1 PVC


The recreated data-redis-ha-server-1 PVC will point to the new PV that was created by the data-redis-ha-server-1-new PVC and now holds the copied data.

Get the volume name created by the data-redis-ha-server-1-new PVC.

export PV_NAME_REDIS_1=$(oc get pvc data-redis-ha-server-1-new --output jsonpath={.spec.volumeName} -n ${PROJECT_CPD_INST_OPERANDS})

Create the JSON file for the recreated data-redis-ha-server-1 PVC.

oc get pvc data-redis-ha-server-1 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-redis-ha-server-1-recreate.json
tmp=$(mktemp)

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-redis-ha-server-1-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-1-recreate.json

Refer to the new PV.

jq --arg PV_NAME_REDIS_1 "$PV_NAME_REDIS_1" '.spec.volumeName = $PV_NAME_REDIS_1' pvc-data-redis-ha-server-1-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-1-recreate.json

Remove the old and new PVCs for redis-ha-server-1

oc delete pvc data-redis-ha-server-1-new -n ${PROJECT_CPD_INST_OPERANDS}

oc delete pvc data-redis-ha-server-1 -n ${PROJECT_CPD_INST_OPERANDS}

Remove the claimRef section from the new PV.

oc patch pv $PV_NAME_REDIS_1 -p '{"spec":{"claimRef": null}}'

Recreate the data-redis-ha-server-1 PVC.

oc apply -f pvc-data-redis-ha-server-1-recreate.json

Make sure the new PVC is created and bound successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-redis-ha-server-1
2.5.4 Migration for the data-redis-ha-server-2 pvc
  • Create a new PVC by referencing the data-redis-ha-server-2 pvc.
oc get pvc data-redis-ha-server-2 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-redis-ha-server-2-new.json
tmp=$(mktemp)

jq '.metadata.name = "data-redis-ha-server-2-new"' pvc-data-redis-ha-server-2-new.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-2-new.json

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-redis-ha-server-2-new.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-2-new.json

oc apply -f pvc-data-redis-ha-server-2-new.json

Make sure the new PVC is created successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-redis-ha-server-2-new

Scale down the sleep deployment

oc scale deploy sleep --replicas=0

Wait until it's scaled down.

  • Mount the old data-redis-ha-server-2 PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=old-claim --claim-name=data-redis-ha-server-2 --mount-path=/old-claim
  • Mount the new data-redis-ha-server-2-new PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=new-claim --claim-name=data-redis-ha-server-2-new --mount-path=/new-claim
  • Scale back the sleep deployment
oc scale deploy sleep --replicas=1

Wait until the sleep pod is up and running.

oc get pods | grep -i sleep
  • rsh into the sleep pod
oc rsh $(oc get pod | grep sleep | awk '{print $1}')
  • Migrate data to the new storage:
rsync -avxHAX --progress /old-claim/* /new-claim

Note: Make sure the terminal session will not be closed or expire during this step.

  • Validate the migration


Ensure the number of files in the old-claim and new-claim folders is the same (the lost+found directory created on the new block volume can be ignored). For example:

sh-4.2$ cd old-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
conf                      : 3
dump.rdb                  : 1
sh-4.2$ cd ../new-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
conf                      : 3
dump.rdb                  : 1
lost+found                : 1
  • Remove the volume mounts from the sleep deployment
oc set volume deployment sleep --remove --confirm
  • Patch the PV of the data-redis-ha-server-2-new PVC to change the ReclaimPolicy to "Retain"
oc patch pv $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep redis-ha-server-2-new | awk '{print $3}') -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS}
  • Recreate original data-redis-ha-server-2 PVC


The recreated data-redis-ha-server-2 PVC will point to the new PV that was created by the data-redis-ha-server-2-new PVC and now holds the copied data.

Get the volume name created by the data-redis-ha-server-2-new PVC.

export PV_NAME_REDIS_2=$(oc get pvc data-redis-ha-server-2-new --output jsonpath={.spec.volumeName} -n ${PROJECT_CPD_INST_OPERANDS})

Create the JSON file for the recreated data-redis-ha-server-2 PVC.

oc get pvc data-redis-ha-server-2 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-redis-ha-server-2-recreate.json
tmp=$(mktemp)

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-redis-ha-server-2-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-2-recreate.json

Refer to the new PV.

jq --arg PV_NAME_REDIS_2 "$PV_NAME_REDIS_2" '.spec.volumeName = $PV_NAME_REDIS_2' pvc-data-redis-ha-server-2-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-redis-ha-server-2-recreate.json

Remove the old and new PVCs for redis-ha-server-2

oc delete pvc data-redis-ha-server-2-new -n ${PROJECT_CPD_INST_OPERANDS}

oc delete pvc data-redis-ha-server-2 -n ${PROJECT_CPD_INST_OPERANDS}

Remove the claimRef section from the new PV.

oc patch pv $PV_NAME_REDIS_2 -p '{"spec":{"claimRef": null}}'

Recreate the data-redis-ha-server-2 PVC.

oc apply -f pvc-data-redis-ha-server-2-recreate.json

Make sure the new PVC is created and bound successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-redis-ha-server-2
2.5.5 Scale the redis-ha-server statefulset back
oc scale sts redis-ha-server --replicas=3 -n ${PROJECT_CPD_INST_OPERANDS}

2.6 Migration for Rabbitmq

2.6.1 Preparation
  • Get old PVC name and volume name.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep -i rabbitmq-ha

Sample output looks like this:

NAME                                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS     AGE
data-rabbitmq-ha-0                                 Bound    pvc-f1c63139-b65e-4474-b2b5-25c2a53955fb   10Gi       RWO            ocs-storagecluster-cephfs   89d
data-rabbitmq-ha-1                                 Bound    pvc-4bd167e4-7df0-418c-a0bb-ce99eb04dc61   10Gi       RWO            ocs-storagecluster-cephfs   89d
data-rabbitmq-ha-2                                 Bound    pvc-85868288-5963-4703-840b-f78f033f34ce   10Gi       RWO            ocs-storagecluster-cephfs   89d
  • Note the mount path of the data volume /var/lib/rabbitmq by checking the volumeMounts definition in rabbitmq-ha sts yaml file.
        volumeMounts:
        - mountPath: /var/lib/rabbitmq
          name: data
  • Make sure that the rabbitmq-ha statefulset has been scaled down to zero replicas.
oc scale sts rabbitmq-ha --replicas=0 -n ${PROJECT_CPD_INST_OPERANDS}
oc get sts -n ${PROJECT_CPD_INST_OPERANDS} | grep -i rabbitmq-ha
2.6.2 Migration for the data-rabbitmq-ha-0 pvc
  • Create a new PVC by referencing the data-rabbitmq-ha-0 pvc.
oc get pvc data-rabbitmq-ha-0 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-rabbitmq-ha-0-new.json

Specify a new name and the right storage class (ocs-storagecluster-ceph-rbd) for the new PVC.

tmp=$(mktemp)

jq '.metadata.name = "data-rabbitmq-ha-0-new"' pvc-data-rabbitmq-ha-0-new.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-0-new.json

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-rabbitmq-ha-0-new.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-0-new.json

oc apply -f pvc-data-rabbitmq-ha-0-new.json

Make sure the new PVC is created successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-rabbitmq-ha-0-new
  • Scale down the sleep deployment
oc scale deploy sleep --replicas=0

Wait until it's scaled down.

oc get deploy sleep
  • Mount the old data-rabbitmq-ha-0 PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=old-claim --claim-name=data-rabbitmq-ha-0 --mount-path=/old-claim
  • Mount the new data-rabbitmq-ha-0-new PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=new-claim --claim-name=data-rabbitmq-ha-0-new --mount-path=/new-claim
  • Scale back the sleep deployment
oc scale deploy sleep --replicas=1

Wait until the sleep pod is up and running.

oc get pods | grep -i sleep
  • rsh into the sleep pod
oc rsh $(oc get pod | grep sleep | awk '{print $1}')
  • Migrate data to the new storage:
rsync -avxHAX --progress /old-claim/* /new-claim

Note: Make sure the terminal session will not be closed or expire during this step.

  • Validate the migration


Ensure the number of files in the old-claim and new-claim folders is the same (the lost+found directory created on the new block volume can be ignored). For example:

sh-4.2$  cd old-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
mnesia                    : 64
sh-4.2$ cd ../new-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
lost+found                : 1
mnesia                    : 64
  • Remove the volume mounts from the sleep deployment
oc set volume deployment sleep --remove --confirm
  • Patch the PV of the data-rabbitmq-ha-0-new PVC to change the ReclaimPolicy to "Retain"
oc patch pv $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep rabbitmq-ha-0-new | awk '{print $3}') -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS}
  • Recreate original data-rabbitmq-ha-0 PVC


The recreated data-rabbitmq-ha-0 PVC will point to the new PV that was created by the data-rabbitmq-ha-0-new PVC and now holds the copied data.

Get the volume name created by the data-rabbitmq-ha-0-new PVC.

export PV_NAME_RABBITMQ_0=$(oc get pvc data-rabbitmq-ha-0-new --output jsonpath={.spec.volumeName} -n ${PROJECT_CPD_INST_OPERANDS})

Create the JSON file for the recreated data-rabbitmq-ha-0 PVC.

oc get pvc data-rabbitmq-ha-0 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-rabbitmq-ha-0-recreate.json
tmp=$(mktemp)

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-rabbitmq-ha-0-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-0-recreate.json

Refer to the new PV.

jq --arg PV_NAME_RABBITMQ_0 "$PV_NAME_RABBITMQ_0" '.spec.volumeName = $PV_NAME_RABBITMQ_0' pvc-data-rabbitmq-ha-0-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-0-recreate.json

Remove the old and new PVCs for rabbitmq-ha-0

oc delete pvc data-rabbitmq-ha-0-new -n ${PROJECT_CPD_INST_OPERANDS}

oc delete pvc data-rabbitmq-ha-0 -n ${PROJECT_CPD_INST_OPERANDS}

Remove the claimRef section from the new PV.

oc patch pv $PV_NAME_RABBITMQ_0 -p '{"spec":{"claimRef": null}}'

Recreate the data-rabbitmq-ha-0 PVC.

oc apply -f pvc-data-rabbitmq-ha-0-recreate.json

Make sure the new PVC is created and bound successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-rabbitmq-ha-0
2.6.3 Migration for the data-rabbitmq-ha-1 pvc
  • Create a new PVC by referencing the data-rabbitmq-ha-1 pvc.
oc get pvc data-rabbitmq-ha-1 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-rabbitmq-ha-1-new.json

Specify a new name and the right storage class (ocs-storagecluster-ceph-rbd) for the new PVC.

tmp=$(mktemp)

jq '.metadata.name = "data-rabbitmq-ha-1-new"' pvc-data-rabbitmq-ha-1-new.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-1-new.json

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-rabbitmq-ha-1-new.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-1-new.json

oc apply -f pvc-data-rabbitmq-ha-1-new.json

Make sure the new PVC is created successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-rabbitmq-ha-1-new
  • Scale down the sleep deployment
oc scale deploy sleep --replicas=0

Wait until it's scaled down.

oc get deploy sleep
  • Mount the old data-rabbitmq-ha-1 PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=old-claim --claim-name=data-rabbitmq-ha-1 --mount-path=/old-claim
  • Mount the new data-rabbitmq-ha-1-new PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=new-claim --claim-name=data-rabbitmq-ha-1-new --mount-path=/new-claim
  • Scale back the sleep deployment
oc scale deploy sleep --replicas=1

Wait until the sleep pod is up and running.

oc get pods | grep -i sleep
  • rsh into the sleep pod
oc rsh $(oc get pod | grep sleep | awk '{print $1}')
  • Migrate data to the new storage:
rsync -avxHAX --progress /old-claim/* /new-claim

Note: Make sure the terminal session will not be closed or expire during this step.

  • Validate the migration


Ensure the number of files in the old-claim and new-claim folders is the same (the lost+found directory created on the new block volume can be ignored). For example:

sh-4.2$  cd old-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
mnesia                    : 64
sh-4.2$ cd ../new-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
lost+found                : 1
mnesia                    : 64
  • Remove the volume mounts from the sleep deployment
oc set volume deployment sleep --remove --confirm
  • Patch the PV of the data-rabbitmq-ha-1-new PVC to change the ReclaimPolicy to "Retain"
oc patch pv $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep rabbitmq-ha-1-new | awk '{print $3}') -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS}
  • Recreate original data-rabbitmq-ha-1 PVC


The recreated data-rabbitmq-ha-1 PVC will point to the new PV that was created by the data-rabbitmq-ha-1-new PVC and now holds the copied data.

Get the volume name created by the data-rabbitmq-ha-1-new PVC.

export PV_NAME_RABBITMQ_1=$(oc get pvc data-rabbitmq-ha-1-new --output jsonpath={.spec.volumeName} -n ${PROJECT_CPD_INST_OPERANDS})

Create the JSON file for the recreated data-rabbitmq-ha-1 PVC.

oc get pvc data-rabbitmq-ha-1 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-rabbitmq-ha-1-recreate.json
tmp=$(mktemp)

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-rabbitmq-ha-1-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-1-recreate.json

Refer to the new PV.

jq --arg PV_NAME_RABBITMQ_1 "$PV_NAME_RABBITMQ_1" '.spec.volumeName = $PV_NAME_RABBITMQ_1' pvc-data-rabbitmq-ha-1-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-1-recreate.json

Remove the old and new PVCs for rabbitmq-ha-1

oc delete pvc data-rabbitmq-ha-1-new -n ${PROJECT_CPD_INST_OPERANDS}

oc delete pvc data-rabbitmq-ha-1 -n ${PROJECT_CPD_INST_OPERANDS}

Remove the claimRef section from the new PV.

oc patch pv $PV_NAME_RABBITMQ_1 -p '{"spec":{"claimRef": null}}'

Recreate the data-rabbitmq-ha-1 PVC.

oc apply -f pvc-data-rabbitmq-ha-1-recreate.json

Make sure the new PVC is created and bound successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-rabbitmq-ha-1
2.6.4 Migration for the data-rabbitmq-ha-2 pvc
  • Create a new PVC by referencing the data-rabbitmq-ha-2 pvc.
oc get pvc data-rabbitmq-ha-2 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-rabbitmq-ha-2-new.json

Specify a new name and the right storage class (ocs-storagecluster-ceph-rbd) for the new PVC.

tmp=$(mktemp)

jq '.metadata.name = "data-rabbitmq-ha-2-new"' pvc-data-rabbitmq-ha-2-new.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-2-new.json

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-rabbitmq-ha-2-new.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-2-new.json

oc apply -f pvc-data-rabbitmq-ha-2-new.json

Make sure the new PVC is created successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-rabbitmq-ha-2-new
  • Scale down the sleep deployment
oc scale deploy sleep --replicas=0

Wait until it's scaled down.

oc get deploy sleep
  • Mount the old data-rabbitmq-ha-2 PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=old-claim --claim-name=data-rabbitmq-ha-2 --mount-path=/old-claim

  • Mount the new data-rabbitmq-ha-2-new PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=new-claim --claim-name=data-rabbitmq-ha-2-new --mount-path=/new-claim
  • Scale back the sleep deployment
oc scale deploy sleep --replicas=1

Wait until the sleep pod is up and running.

oc get pods | grep -i sleep
  • rsh into the sleep pod
oc rsh $(oc get pod | grep sleep | awk '{print $1}')
  • Migrate data to the new storage:
rsync -avxHAX --progress /old-claim/* /new-claim

Note: Make sure the terminal session will not be closed or expire during this step.

  • Validate the migration


For example:

sh-4.2$  cd old-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
mnesia                    : 69
sh-4.2$ cd ../new-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
lost+found                : 1
mnesia                    : 69
  • Remove the volume mounts from the sleep deployment
oc set volume deployment sleep --remove --confirm
  • Patch the PV of the data-rabbitmq-ha-2-new PVC to change the ReclaimPolicy to "Retain"
oc patch pv $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-rabbitmq-ha-2-new | awk '{print $3}') -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS}
  • Recreate the original data-rabbitmq-ha-2 PVC


The recreated data-rabbitmq-ha-2 PVC will point to the new PV that was provisioned earlier for the data-rabbitmq-ha-2-new PVC and now holds the copied data.

Get the volume name created by the data-rabbitmq-ha-2-new PVC.

export PV_NAME_RABBITMQ_2=$(oc get pvc data-rabbitmq-ha-2-new --output jsonpath={.spec.volumeName} -n ${PROJECT_CPD_INST_OPERANDS})

Create the JSON definition of the recreated data-rabbitmq-ha-2 PVC.

oc get pvc data-rabbitmq-ha-2 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-rabbitmq-ha-2-recreate.json
tmp=$(mktemp)

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-rabbitmq-ha-2-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-2-recreate.json

Refer to the new PV.

jq --arg PV_NAME_RABBITMQ_2 "$PV_NAME_RABBITMQ_2" '.spec.volumeName = $PV_NAME_RABBITMQ_2' pvc-data-rabbitmq-ha-2-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-2-recreate.json

Remove the old and new PVCs for rabbitmq-ha-2

oc delete pvc data-rabbitmq-ha-2-new -n ${PROJECT_CPD_INST_OPERANDS}

oc delete pvc data-rabbitmq-ha-2 -n ${PROJECT_CPD_INST_OPERANDS}

Remove the claimRef section from the new PV.

oc patch pv $PV_NAME_RABBITMQ_2 -p '{"spec":{"claimRef": null}}'

Recreate the data-rabbitmq-ha-2 PVC.

oc apply -f pvc-data-rabbitmq-ha-2-recreate.json

Make sure the new PVC is created and bound successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-rabbitmq-ha-2
2.6.5 Migration for the data-rabbitmq-ha-3 pvc
  • Create a new PVC by referencing the data-rabbitmq-ha-3 pvc.
oc get pvc data-rabbitmq-ha-3 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-rabbitmq-ha-3-new.json

Specify a new name and the right storage class (ocs-storagecluster-ceph-rbd) for the new PVC.

tmp=$(mktemp)

jq '.metadata.name = "data-rabbitmq-ha-3-new"' pvc-data-rabbitmq-ha-3-new.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-3-new.json

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-rabbitmq-ha-3-new.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-3-new.json

oc apply -f pvc-data-rabbitmq-ha-3-new.json

Make sure the new PVC is created successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-rabbitmq-ha-3-new
  • Scale down the sleep deployment
oc scale deploy sleep --replicas=0

Wait until it's scaled down.

oc get deploy sleep
  • Mount the old data-rabbitmq-ha-3 PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=old-claim --claim-name=data-rabbitmq-ha-3 --mount-path=/old-claim
  • Mount the new data-rabbitmq-ha-3-new PVC to the sleep pod
oc set volume deployment/sleep --add -t pvc --name=new-claim --claim-name=data-rabbitmq-ha-3-new --mount-path=/new-claim
  • Scale back the sleep deployment
oc scale deploy sleep --replicas=1

Wait until the sleep pod is up and running.

oc get pods | grep -i sleep
  • rsh into the sleep pod
oc rsh $(oc get pod | grep sleep | awk '{print $1}')
  • Migrate data to the new storage:
rsync -avxHAX --progress /old-claim/* /new-claim

Note: Make sure the terminal session is not closed and does not time out during this step.

  • Validate the migration


For example:

sh-4.2$  cd old-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
mnesia                    : 69
sh-4.2$ cd ../new-claim/
sh-4.2$ ls | while read dir; do printf "%-25.45s : " "$dir"; ls -R "$dir" | sed '/^[[:space:]]*$/d' | wc -l; done
lost+found                : 1
mnesia                    : 69
  • Remove the volume mounts from the sleep deployment
oc set volume deployment sleep --remove --confirm
  • Patch the PV of the data-rabbitmq-ha-3-new PVC to change the ReclaimPolicy to Retain
oc patch pv $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-rabbitmq-ha-3-new | awk '{print $3}') -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -n ${PROJECT_CPD_INST_OPERANDS}
  • Recreate the original data-rabbitmq-ha-3 PVC


The recreated data-rabbitmq-ha-3 PVC will point to the new PV that was provisioned earlier for the data-rabbitmq-ha-3-new PVC and now holds the copied data.

Get the volume name created by the data-rabbitmq-ha-3-new PVC.

export PV_NAME_RABBITMQ_3=$(oc get pvc data-rabbitmq-ha-3-new --output jsonpath={.spec.volumeName} -n ${PROJECT_CPD_INST_OPERANDS})

Create the JSON definition of the recreated data-rabbitmq-ha-3 PVC.

oc get pvc data-rabbitmq-ha-3 -o json | jq 'del(.status)'| jq 'del(.metadata.annotations)' | jq 'del(.metadata.creationTimestamp)'|jq 'del(.metadata.resourceVersion)'|jq 'del(.metadata.uid)'| jq 'del(.spec.volumeName)' > pvc-data-rabbitmq-ha-3-recreate.json
tmp=$(mktemp)

jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' pvc-data-rabbitmq-ha-3-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-3-recreate.json

Refer to the new PV.

jq --arg PV_NAME_RABBITMQ_3 "$PV_NAME_RABBITMQ_3" '.spec.volumeName = $PV_NAME_RABBITMQ_3' pvc-data-rabbitmq-ha-3-recreate.json > "$tmp" && mv -f "$tmp" pvc-data-rabbitmq-ha-3-recreate.json

Remove the old and new PVCs for rabbitmq-ha-3

oc delete pvc data-rabbitmq-ha-3-new -n ${PROJECT_CPD_INST_OPERANDS}

oc delete pvc data-rabbitmq-ha-3 -n ${PROJECT_CPD_INST_OPERANDS}

Remove the claimRef section from the new PV.

oc patch pv $PV_NAME_RABBITMQ_3 -p '{"spec":{"claimRef": null}}'

Recreate the data-rabbitmq-ha-3 PVC.

oc apply -f pvc-data-rabbitmq-ha-3-recreate.json

Make sure the new PVC is created and bound successfully.

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep data-rabbitmq-ha-3
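The recreate sequence in sections 2.6.3 to 2.6.5 differs only in the replica index. As an optional convenience, and not part of the validated procedure above, the steps could be wrapped in a small helper function such as the illustrative sketch below (the function name and structure are assumptions; it expects the data copy and the Retain reclaim-policy patch to have been done already):

# Illustrative sketch only: recreate an original PVC on top of the PV
# provisioned for its "-new" counterpart.
recreate_pvc() {
  pvc="$1"                                  # e.g. data-rabbitmq-ha-1
  ns="${PROJECT_CPD_INST_OPERANDS}"
  pv=$(oc get pvc "${pvc}-new" -n "$ns" -o jsonpath='{.spec.volumeName}')
  tmp=$(mktemp)

  # Build the recreate manifest from the original PVC definition.
  oc get pvc "$pvc" -n "$ns" -o json \
    | jq 'del(.status, .metadata.annotations, .metadata.creationTimestamp, .metadata.resourceVersion, .metadata.uid, .spec.volumeName)' \
    | jq '.spec.storageClassName = "ocs-storagecluster-ceph-rbd"' \
    | jq --arg pv "$pv" '.spec.volumeName = $pv' > "$tmp"

  # Swap the PVCs: delete old and "-new", free the retained PV, recreate.
  oc delete pvc "${pvc}-new" "$pvc" -n "$ns"
  oc patch pv "$pv" -p '{"spec":{"claimRef": null}}'
  oc apply -f "$tmp"
  oc get pvc "$pvc" -n "$ns"
}

# Example usage: recreate_pvc data-rabbitmq-ha-1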
2.6.6 Scale the rabbitmq-ha statefulset back
oc scale sts rabbitmq-ha --replicas=4 -n ${PROJECT_CPD_INST_OPERANDS}
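Optionally, wait until all RabbitMQ replicas are ready before moving on:

oc rollout status sts rabbitmq-ha -n ${PROJECT_CPD_INST_OPERANDS}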

2.7 Change the ReclaimPolicy back to "Delete" for the PVs

1. Patch the CouchDB PVs.

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep couchdb | awk '{print $3}') ;do oc patch pv $p -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' -n ${PROJECT_CPD_INST_OPERANDS};done

2. Patch the Redis PVs.

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep redis-ha-server | awk '{print $3}') ;do oc patch pv $p -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' -n ${PROJECT_CPD_INST_OPERANDS};done

3. Patch the RabbitMQ PVs.

for p in $(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep rabbitmq-ha | awk '{print $3}') ;do oc patch pv $p -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' -n ${PROJECT_CPD_INST_OPERANDS};done
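As an optional check, confirm that the RECLAIM POLICY column now shows Delete for the migrated PVs:

oc get pv | grep ${PROJECT_CPD_INST_OPERANDS} | grep -E "couchdb|redis-ha-server|rabbitmq-ha"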

2.8 Make sure the correct storage classes are specified in the CCS cr and OpenSearch cr

oc patch ccs ccs-cr --type merge --patch '{"spec": {"blockStorageClass": "ocs-storagecluster-ceph-rbd", "fileStorageClass": "ocs-storagecluster-cephfs"}}' -n ${PROJECT_CPD_INST_OPERANDS}
oc get ccs ccs-cr -oyaml
oc get elasticsearchcluster elasticsearch-master -oyaml
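To read just the storage class fields that were patched above, an optional jsonpath query can be used instead of scanning the full yaml output:

oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} -o jsonpath='blockStorageClass: {.spec.blockStorageClass}{"\n"}fileStorageClass: {.spec.fileStorageClass}{"\n"}'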

2.9 Get the CCS cr out of maintenance mode

Take the CCS cr out of maintenance mode to trigger the operator reconciliation.

oc patch ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}' -n ${PROJECT_CPD_INST_OPERANDS}
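To follow the reconciliation as it runs (optional), watch the CCS cr until its status reaches Completed:

oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} -w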

2.10 Validation

  • Make sure the CCS custom resource is in 'Completed' status and references the correct storage classes.
oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
  • Make sure all the services are in 'Completed' status.


Run the cpd-cli manage login-to-ocp command to log in to the cluster.

cpd-cli manage login-to-ocp \
--username=${OCP_USERNAME} \
--password=${OCP_PASSWORD} \
--server=${OCP_URL}

Get all services' status.

cpd-cli manage get-cr-status --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
  • Make sure the migration-relevant pods are up and running.
oc get pods -n ${PROJECT_CPD_INST_OPERANDS}| grep -E "es-server-esnodes|wdp-couchdb|redis-ha-server|rabbitmq-ha"
  • Make sure the migration-relevant PVCs are in 'Bound' status and use the correct storage classes (see the optional check after this list).
oc get pvc -n ${PROJECT_CPD_INST_OPERANDS}| grep -E "es-server-esnodes|wdp-couchdb|redis-ha-server|rabbitmq-ha"
  • Conduct user acceptance tests


Comprehensive tests should be performed by end users, including accessing existing data and creating new data with multiple user accounts.
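For the PVC check mentioned above, an optional way to confirm that none of the migrated PVCs is still bound to the file storage class is the command below; it prints nothing when the migration is complete:

oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} | grep -E "es-server-esnodes|wdp-couchdb|redis-ha-server|rabbitmq-ha" | grep ocs-storagecluster-cephfs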

2.11 Post-migration

  • Recreate the CPD route from the backup YAML file created in step 1.6 if the route was deleted before the migration.

  • Clean up

oc -n ${PROJECT_CPD_INST_OPERANDS} delete deployment sleep 
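Optionally, remove the temporary PVC manifest files that were created locally during the migration (the file name patterns match the steps above):

rm -f pvc-*-new.json pvc-*-recreate.json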
