Disable SELinux relabeling on an ODF CephFS volume and copy raw-span data from the old to the new PV

Tue January 27, 2026 02:16 PM

Introduction

This article describes how to avoid SELinux relabeling with ODF on an OCP cluster by creating a separate storage class with kernelMountOptions, following the Red Hat article: Workarounds to skip SELinux relabelling in Openshift Data Foundation / Openshift Container Storage. After creating the storage class, an optional step is to copy the data from the old raw-spans persistent volume to a new one. We will create an additional pod that attaches both the old and the new volume at the same time and copies the data from one to the other. Because the copy command may run for a long time (depending on the load and on the network connection of the host where the commands are executed), it is recommended to run the oc commands from a bastion node inside a tmux or screen session. This article uses tmux in its examples.

Here are a few tmux commands that may be useful:

  • If tmux is not installed, install it with:
dnf install tmux

  • Start a tmux session named "copy_pvc" (create it if it does not exist, attach if it does); run in Bash:
tmux new -A -s copy_pvc
  • Keyboard shortcut to detach from a tmux session while keeping it running in the background; from inside the tmux session:
Press "Ctrl+b"
and then "D" to detach
  • To list the currently running tmux sessions, run in Bash:
tmux ls
  • To attach to the session named "copy_pvc", run from Bash again:
tmux new -A -s copy_pvc
  • To end the current tmux session, enter it with the command above and:
## press Ctrl+D
## or type:
exit

Step 1. Create a storage class

Start a tmux session named "copy_pvc" (create it if it does not exist, attach if it does); run in Bash:

tmux new -A -s copy_pvc

In this session we will define environment variables holding the names of PVs and pods, so please do not close the tmux session until the work is done; otherwise the variables will have to be defined again. To create a storage class that does not require relabeling (we will use the same name as in Red Hat's article, ocs-storagecluster-cephfs-selinux-relabel), execute the following command:

cat << EOF | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ocs-storagecluster-cephfs-selinux-relabel
provisioner: openshift-storage.cephfs.csi.ceph.com
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
  kernelMountOptions: context="system_u:object_r:container_file_t:s0"
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
EOF
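
As a quick sanity check (not part of the Red Hat article), you can verify that the storage class was created with the expected provisioner and mount options:

# The storage class should list the CephFS provisioner
oc get storageclass ocs-storagecluster-cephfs-selinux-relabel

# Show the kernelMountOptions parameter we just set
oc get storageclass ocs-storagecluster-cephfs-selinux-relabel -o jsonpath='{.parameters.kernelMountOptions}{"\n"}'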

Step 2. Set Instana to Maintenance mode

We need to delete the existing PVC and create a new one while changing the storage class in the Core manifest, so please schedule a short maintenance window for Instana. Then stop Instana to perform the changes:

oc -n instana-core patch core instana-core --type=merge -p '{"spec":{"operationMode": "maintenance"}}'
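
Before proceeding, wait until the Core and Unit pods have terminated. One way to watch this (assuming the default instana-core and instana-units namespaces) is:

# Watch the pods terminate; press Ctrl+C to stop watching
oc get pods -n instana-core -w
oc get pods -n instana-units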

Step 3. Set reclaim policy to Retain and delete old PVC

When the Instana Core and Unit pods are gone, patch the PV and delete its PVC. We will reuse the same PVC name, but with the new storage class:

RAW_SPANS_PV_NAME=$(oc get pvc spans-volume-claim -n instana-core -o jsonpath='{.spec.volumeName}')
OLD_STORAGE_CLASS_NAME=$(oc get pvc spans-volume-claim -n instana-core -o jsonpath='{.spec.storageClassName}')
OLD_STORAGE_SIZE=$(oc get pvc spans-volume-claim -n instana-core -o jsonpath='{.spec.resources.requests.storage}')
oc patch pv $RAW_SPANS_PV_NAME --type=merge -p '{"spec":{"persistentVolumeReclaimPolicy": "Retain"}}'
oc delete pvc spans-volume-claim -n instana-core
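
As a sanity check, confirm that the variables were captured and that the PV is now Released with the Retain reclaim policy (so the data survives the PVC deletion):

echo "PV: $RAW_SPANS_PV_NAME, class: $OLD_STORAGE_CLASS_NAME, size: $OLD_STORAGE_SIZE"
# STATUS should show "Released" and RECLAIM POLICY "Retain"
oc get pv $RAW_SPANS_PV_NAME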

Step 4. Configure Instana to use new storage class

As mentioned in step 1, we will be using the newly created storage class ocs-storagecluster-cephfs-selinux-relabel. We need to set this value in Instana's Core manifest and change the Instana operationMode from "maintenance" back to "normal":

oc -n instana-core patch core instana-core --type=merge -p \
   '{"spec":{"storageConfigs": {"rawSpans": {"pvcConfig": {"storageClassName": "ocs-storagecluster-cephfs-selinux-relabel"}}}}}'

oc -n instana-core patch core instana-core --type=merge -p '{"spec":{"operationMode": "normal"}}'

Now the Instana backend should be running fine, with one exception: calls stored on the old PVC will show no details until we copy them over. The copy procedure will take a while and will be performed while Instana is operating. Should the Instana operation or the copy command fail for some reason, the copy command can be executed again (but you might need to define the environment variables again so that they point to the new pod and PV/PVC names).
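
To verify, check that the operator created a new PVC from the new storage class and that the backend pods are starting again:

# The PVC should be Bound and show the new storage class
oc get pvc spans-volume-claim -n instana-core
oc get pods -n instana-core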

Step 5. Start a temporary pod with the old PV attached

In this step we clear the existing claim reference on the old PV so that it can be bound to a newly created PVC that points to the old persistent volume. Then we create that PVC:

CORE_UID=`oc get core instana-core -n instana-core -o jsonpath='{.metadata.uid}'`
oc patch pv $RAW_SPANS_PV_NAME --type=merge -p '{"spec":{"claimRef": null}}'

cat << EOF | oc apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: spans-volume-claim-old
  namespace: instana-core
  ownerReferences:
    - apiVersion: instana.io/v1beta2
      kind: Core
      name: instana-core
      uid: ${CORE_UID}
      controller: true
      blockOwnerDeletion: true
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: ${OLD_STORAGE_SIZE}
  volumeName: ${RAW_SPANS_PV_NAME}
  storageClassName: ${OLD_STORAGE_CLASS_NAME}
  volumeMode: Filesystem
EOF
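
Before creating the pod, it is worth checking that the new PVC bound to the old PV:

# STATUS should be "Bound" and VOLUME should equal $RAW_SPANS_PV_NAME
oc get pvc spans-volume-claim-old -n instana-core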

Once the PVC is bound to the PV, we need to create a pod that will be used to copy the data from the old volume to the new one. The name of the deployment and pod starts with "aaaaa-..." for convenience, so that it sorts above all other pods in the OCP dashboards:

IMAGE_NAME=$(oc get pod -l app.kubernetes.io/component=appdata-reader -n instana-core -o jsonpath='{.items[0].spec.containers[0].image}')

cat << EOF | oc apply -f -
kind: Deployment
apiVersion: apps/v1
metadata:
  name: aaaaa-copy
  namespace: instana-core
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: aaaaa-copy
  template:
    metadata:
      labels:
        app.kubernetes.io/component: aaaaa-copy
    spec:
      containers:
      - name: oldpvc
        args:
          - while true; do sleep 30; done; 
        command:
          - /bin/sh
          - -c
          - --
        securityContext:
          capabilities:
            drop:
            - ALL
          privileged: false
          runAsNonRoot: true
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
          seccompProfile:
            type: RuntimeDefault
        image: ${IMAGE_NAME}
        volumeMounts:
        - mountPath: "/opt/raw-spans" # Path inside container
          name: raw-spans-old
        - mountPath: "/mnt/raw-spans" # Path inside container
          name: raw-spans-new
      volumes:
      - name: raw-spans-old
        persistentVolumeClaim:
          claimName: spans-volume-claim-old
      - name: raw-spans-new
        persistentVolumeClaim:
          claimName: spans-volume-claim
EOF
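
Wait until the copy pod is running before starting the copy, for example:

# Wait up to 5 minutes for the deployment to become available
oc wait --for=condition=Available deployment/aaaaa-copy -n instana-core --timeout=300s
oc get pods -l app.kubernetes.io/component=aaaaa-copy -n instana-core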

Step 6. Start copying data from old to new PV

We should still be in the tmux session; if not, enter it as described at the beginning. The reason we run the command below in a tmux session is that the copy procedure takes a relatively long time, depending on the size of the data and the speed of the storage subsystem. Start copying the data from inside the created pod:

POD_NAME=$(oc get pod -l app.kubernetes.io/component=aaaaa-copy -n instana-core -o jsonpath='{.items[0].metadata.name}')

time oc exec $POD_NAME -n instana-core -- cp -rP /opt/raw-spans /mnt

Now we can detach from the session and leave it running:

Press "Ctrl+b"
and then "D" to detach

We can check whether the copy command is still running in the pod with:

POD_NAME=$(oc get pod -l app.kubernetes.io/component=aaaaa-copy -n instana-core -o jsonpath='{.items[0].metadata.name}')
oc exec $POD_NAME -n instana-core -- ps -ef

If the process is still running, there will be an entry like cp -rP /opt/raw-spans /mnt in the list.
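
To get a rough idea of the progress, you can compare the amount of data on both volumes (only an estimate; du can be slow on large directory trees):

# Size of the source (old) and destination (new) data
oc exec $POD_NAME -n instana-core -- du -sh /opt/raw-spans
oc exec $POD_NAME -n instana-core -- du -sh /mnt/raw-spans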

Step 7. Delete the temporary pod and the old PV

Attach to the tmux session named "copy_pvc" again (it is created if it does not exist); run in Bash:

tmux new -A -s copy_pvc

If the copy is still in progress, detach from the session and leave it running:

Press "Ctrl+b"
and then "D" to detach

Or, if it has finished, we can check that old calls show their details and proceed with deleting the old data:

oc delete deployment aaaaa-copy -n instana-core
oc delete pvc spans-volume-claim-old -n instana-core
oc delete pv $RAW_SPANS_PV_NAME
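
Optionally, confirm that the old resources are gone (both commands should report "not found"):

oc get pvc spans-volume-claim-old -n instana-core
oc get pv $RAW_SPANS_PV_NAME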

Finish the tmux session:

## press Ctrl+D
## or type:
exit

Relevant documentation:

Red Hat Knowledgebase: Workarounds to skip SELinux relabelling in Openshift Data Foundation / Openshift Container Storage

#Administration
#General
#Self-Hosted
