Set up MinIO and OADP for an IBM Cloud Pak for Data air-gapped cluster

By Harris Yang posted Sat December 09, 2023 03:35 AM

  


Introduction

IBM® Cloud Pak for Data (CPD) supports online and offline backup and restore.

To back up CPD, users must install the OpenShift® APIs for Data Protection (OADP) backup utilities, which can back up the Kubernetes metadata and persistent volumes in a Cloud Pak for Data instance project to S3-compatible object storage.

This blog walks you through the steps to set up the CPD OADP backup utilities on an air-gapped cluster, using MinIO object storage as the example.

- OADP

OpenShift API for Data Protection (OADP) is the OpenShift tool used to back up and restore Kubernetes/OpenShift cluster resources and persistent volumes. It is based on the Velero project.

Users can use OADP to back up cluster resources to a local S3 object store. The OADP Operator installs Velero in the OpenShift cluster.

- MinIO

MinIO (https://min.io) is a high-performance object store released under the GNU Affero General Public License v3.0. It is API-compatible with the Amazon S3 cloud storage service and is well suited to unstructured data.


1. Prepare a bastion node and install required tools

The bastion node is a RHEL 8 machine that can access both the private container registry and the internet. On it, users install the required tools to download images from the public image registry, mirror them into the private container registry, and delete the images after the upgrade. These tools include:

- OpenShift CLI

Users must install a version of the OpenShift CLI that is compatible with their Red Hat OpenShift Container Platform cluster.

Check the oc CLI version

oc version
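
If the oc binary is not yet installed, one way to get it (a sketch, assuming an x86_64 bastion; pick the client version that matches your cluster instead of "stable" if needed) is to download it from the OpenShift mirror site:

wget https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz
tar xvf openshift-client-linux.tar.gz -C /usr/local/bin oc kubectl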


- podman

Podman (the POD manager) is an open source tool for developing, managing, and running containers on your Linux® systems. Originally developed by Red Hat® engineers along with the open source community, Podman manages the entire container ecosystem using the libpod library. 

Podman’s daemonless and inclusive architecture makes it a more secure and accessible option for container management, and its accompanying tools and features, such as Buildah and Skopeo, allow developers to customize their container environments to best suit their needs. 

Install podman

yum install -y podman
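
To confirm that podman can reach the private container registry before any mirroring starts (a quick check; --tls-verify=false assumes the registry uses a self-signed certificate, matching the skopeo commands later in this blog):

podman login --tls-verify=false -u <private_container_registry_username> -p <private_container_registry_password> <private_container_registry_location>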


- skopeo

Skopeo is a tool for manipulating, inspecting, signing, and transferring container images and image repositories on Linux®, Windows, and macOS systems. Like Podman and Buildah, Skopeo is an open source community-driven project that does not require running a container daemon (https://github.com/containers/skopeo).

With Skopeo, you can inspect images on a remote registry without having to download the entire image with all its layers, making it a lightweight and modular solution for working with container images across different formats, including Open Container Initiative (OCI) and Docker images.

Install skopeo

yum install -y skopeo
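
For example, skopeo can read an image's metadata from a remote registry without pulling any layers; here is a quick sanity check against the MinIO image that is mirrored in the next step:

skopeo inspect docker://docker.io/minio/minio:RELEASE.2021-04-22T15-44-28Z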


2. Set up MinIO

Mirror the MinIO images into the private container registry

export PRIVATE_REGISTRY_LOCATION=<private_container_registry_location>
export PRIVATE_REGISTRY_PUSH_USER=<private_container_registry_username>
export PRIVATE_REGISTRY_PUSH_PASSWORD=<private_container_registry_password>
export PRIVATE_REGISTRY_PULL_USER=<private_container_registry_username>
export PRIVATE_REGISTRY_PULL_PASSWORD=<private_container_registry_password>

skopeo copy --dest-tls-verify=false --src-tls-verify=false \
  --dest-creds ${PRIVATE_REGISTRY_PUSH_USER}:${PRIVATE_REGISTRY_PUSH_PASSWORD} \
  docker://docker.io/minio/minio:RELEASE.2021-04-22T15-44-28Z \
  docker://${PRIVATE_REGISTRY_LOCATION}/minio/minio:RELEASE.2021-04-22T15-44-28Z

skopeo copy --dest-tls-verify=false --src-tls-verify=false \
  --dest-creds ${PRIVATE_REGISTRY_PUSH_USER}:${PRIVATE_REGISTRY_PUSH_PASSWORD} \
  docker://docker.io/minio/mc:latest \
  docker://${PRIVATE_REGISTRY_LOCATION}/minio/mc:latest
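
To verify that the images landed in the private registry (optional; this uses the pull credentials exported above):

skopeo inspect --tls-verify=false \
  --creds ${PRIVATE_REGISTRY_PULL_USER}:${PRIVATE_REGISTRY_PULL_PASSWORD} \
  docker://${PRIVATE_REGISTRY_LOCATION}/minio/minio:RELEASE.2021-04-22T15-44-28Z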

The Velero release tarball ships with a sample MinIO deployment, which this blog uses. (Standalone MinIO releases are also available from https://min.io/download#/linux.)

In this blog, we take Velero v1.6.0 as the example:

wget https://github.com/vmware-tanzu/velero/releases/download/v1.6.0/velero-v1.6.0-linux-amd64.tar.gz
tar xvf velero-v1.6.0-linux-amd64.tar.gz

Edit the deployment file to update the MinIO and mc image locations so that they point to the private container registry

vi velero-v1.6.0-linux-amd64/examples/minio/00-minio-deployment.yaml

      - name: minio
        image: <private_container_registry_location>/minio/minio:RELEASE.2021-04-22T15-44-28Z
    ...
      - name: mc
        image: <private_container_registry_location>/minio/mc:latest
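
Alternatively, the same edit can be scripted (a sketch; it assumes the sample file references the images as minio/minio:latest and minio/mc:latest, so review the resulting YAML before applying it):

sed -i \
  -e "s|image: minio/minio:.*|image: ${PRIVATE_REGISTRY_LOCATION}/minio/minio:RELEASE.2021-04-22T15-44-28Z|" \
  -e "s|image: minio/mc:.*|image: ${PRIVATE_REGISTRY_LOCATION}/minio/mc:latest|" \
  velero-v1.6.0-linux-amd64/examples/minio/00-minio-deployment.yaml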


Create the velero project

oc new-project velero

Create the sample MinIO deployment in the "velero" project

oc -n velero apply -f velero-v1.6.0-linux-amd64/examples/minio/00-minio-deployment.yaml

Create two persistent volume claims and attach them to the deployment so that the MinIO configuration and data survive pod restarts. Change the storage class and size as needed

cat <<EOF |oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: velero
  name: minio-config-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF


cat <<EOF |oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: velero
  name: minio-storage-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: ocs-storagecluster-cephfs
EOF

oc set volume deployment.apps/minio --add --overwrite --name=config \
  --mount-path=/config --type=persistentVolumeClaim --claim-name="minio-config-pvc" -n velero

oc set volume deployment.apps/minio --add --overwrite --name=storage \
  --mount-path=/storage --type=persistentVolumeClaim --claim-name="minio-storage-pvc" -n velero
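
Verify that both claims are bound (this assumes the ocs-storagecluster-cephfs storage class can provision ReadWriteMany volumes); each claim should report STATUS Bound:

oc get pvc -n velero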

Set resource limits for the minio deployment

oc set resources deployment minio -n velero --requests=cpu=500m,memory=256Mi --limits=cpu=1,memory=1Gi

Check that the MinIO pods are up and running

oc get pods -n velero
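
To block until the deployment is fully available instead of polling manually (a convenience; the 300s timeout is arbitrary):

oc wait --for=condition=Available deployment/minio -n velero --timeout=300s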

Expose the minio service

oc expose svc minio -n velero

Get the MinIO URL

oc get route minio -n velero

The output should be something like this:

NAME    HOST/PORT                                   PATH   SERVICES   PORT   TERMINATION   WILDCARD
minio   <minio_route_path>                                 minio      9000                 None

export MINIO_ROUTE=$(oc get route minio -n velero | grep minio | awk '{print $2}')
echo ${MINIO_ROUTE}

Probe MinIO liveness

curl -I -k  http://${MINIO_ROUTE}/minio/health/live
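
If MinIO is healthy, the response should include HTTP/1.1 200 OK.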

Go to the MinIO web UI and create a bucket named "velero"

Example:

http://<minio_route_path>
User: minio
Password: minio123
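
If the web UI is not convenient, the bucket can also be created from the bastion node with the mirrored mc image (a sketch; it assumes the bastion can resolve the route hostname and that the mirrored mc version supports "mc alias set"):

podman run --rm --tls-verify=false --entrypoint /bin/sh \
  ${PRIVATE_REGISTRY_LOCATION}/minio/mc:latest \
  -c "mc alias set myminio http://${MINIO_ROUTE} minio minio123 && mc mb myminio/velero"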


3. Set up OADP

Mirror the ubi-minimal image into the private container registry

podman login registry.redhat.io

skopeo copy --dest-tls-verify=false --src-tls-verify=false \
  --dest-creds ${PRIVATE_REGISTRY_PUSH_USER}:${PRIVATE_REGISTRY_PUSH_PASSWORD} \
  docker://registry.redhat.io/ubi8/ubi-minimal:latest \
  docker://$PRIVATE_REGISTRY_LOCATION/ubi8/ubi-minimal:latest --remove-signatures

Mirror the cpdbr-velero-plugin image into the private container registry

skopeo copy --dest-tls-verify=false --src-tls-verify=false \
  --dest-creds ${PRIVATE_REGISTRY_PUSH_USER}:${PRIVATE_REGISTRY_PUSH_PASSWORD} \
  docker://icr.io/cpopen/cpd/cpdbr-velero-plugin:4.0.0-beta1-1-x86_64 \
  docker://${PRIVATE_REGISTRY_LOCATION}/cpopen/cpd/cpdbr-velero-plugin:4.0.0-beta1-1-x86_64

Create the "oadp-operator" namespace if it doesn't already exist

oc new-project oadp-operator

Annotate the OADP operator namespace so that restic pods can be scheduled on all nodes

oc annotate namespace oadp-operator openshift.io/node-selector=""

Install the Red Hat OADP operator v1.1 or v1.2 from the OperatorHub in the OpenShift web console (a command-line alternative is sketched after the notes below)

Notes:

1. For Red Hat OADP operator v1.1 or v1.2, select the "stable" update channel in OperatorHub -> OADP Operator -> Install -> Install Operator (Update Channel).
2. The default namespace for the OADP operator is "openshift-adp". The examples shown here set the OADP namespace to "oadp-operator". In "Installed Namespace", select "Pick an existing namespace" and choose "oadp-operator".
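
For a scripted install, the same operator can be subscribed to from the CLI (a sketch, assuming the package is published as redhat-oadp-operator in the redhat-operators catalog source; confirm the name with "oc get packagemanifests -n openshift-marketplace | grep oadp"):

cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: oadp-operator-group
  namespace: oadp-operator
spec:
  targetNamespaces:
  - oadp-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: redhat-oadp-operator
  namespace: oadp-operator
spec:
  channel: stable
  name: redhat-oadp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF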

Create a secret in the "oadp-operator" namespace with the object store credentials. Credentials should use only alphanumeric characters and must not contain special characters such as '#'.

cat << EOF > ./credentials-velero
[default]
aws_access_key_id=minio
aws_secret_access_key=minio123
EOF

For OADP 1.x, the secret name must be "cloud-credentials".

oc create secret generic cloud-credentials \
  --namespace oadp-operator \
  --from-file cloud=./credentials-velero
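
Confirm that the secret exists and carries the expected key (note that this prints the credentials to the terminal):

oc get secret cloud-credentials -n oadp-operator -o jsonpath='{.data.cloud}' | base64 -d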

Create a DataProtectionApplication (Velero) instance

cat << EOF > ./dpa-sample.yaml
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
spec:
  configuration:
    velero:
      customPlugins:
      - image: ${PRIVATE_REGISTRY_LOCATION}/cpopen/cpd/cpdbr-velero-plugin:4.0.0-beta1-1-x86_64
        name: cpdbr-velero-plugin
      defaultPlugins:
      - aws
      - openshift
      - csi
      podConfig:
        resourceAllocations:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 256Mi
    restic:
      enable: true
      timeout: 12h
      podConfig:
        resourceAllocations:
          limits:
            cpu: "1"
            memory: 8Gi
          requests:
            cpu: 500m
            memory: 256Mi
        tolerations:
        - key: icp4data
          operator: Exists
          effect: NoSchedule
  backupImages: false            
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: velero
          prefix: cpdbackup
        config:
          region: minio
          s3ForcePathStyle: "true"
          s3Url: http://${MINIO_ROUTE}
        credential:
          name: cloud-credentials
          key: cloud
EOF


oc create -f ./dpa-sample.yaml -n oadp-operator
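
Check that the instance reconciled successfully; the OADP operator sets a Reconciled condition on the resource ("dpa" is the short name of the DataProtectionApplication CRD):

oc get dpa dpa-sample -n oadp-operator -o jsonpath='{.status.conditions}'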

Check that the velero pods are running in the "oadp-operator" namespace.

oc get po -n oadp-operator

The restic daemonset should create one restic pod for each worker node. (In OADP 1.2, the daemonset and its pods may be named node-agent instead.)
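
Also verify that the backup storage location is reachable; the PHASE column should report Available once Velero can write to the MinIO bucket:

oc get backupstoragelocations -n oadp-operator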

End of the blog
