Backing up and restoring Cloud Pak for Integration using OADP

By Giacomo Chiarella posted Wed February 07, 2024 09:22 AM


Introduction

Starting with IBM Cloud Pak for Integration 2023.4.1, users can back up and restore components of the Cloud Pak for Integration by using Red Hat OpenShift API for Data Protection (OADP).

The OADP mechanism offers a fast, simple, and consistent way of backing up the Cloud Pak. Currently, this works with:

  • Operators (and related artefacts)
  • Platform UI
  • Declarative API
  • Declarative API Product
  • Automation assets
  • App Connect resources
  • Event Streams resources

This initial release is a great introduction to the technology, and I encourage users to try it. Not all components are supported at the moment, but all feedback is greatly appreciated. One component currently missing is identity and access management; however, the components above will allow you to restore critical workloads.

Throughout this blog I'll go through an example backup and restore, and highlight some key features.

For detailed instructions on how to use OADP with the Cloud Pak, see Backing up and restoring IBM Cloud Pak for Integration.

For more information on OADP, see Introduction to OpenShift API for Data Protection.

Example backup and restore

I will walk through the process of backing up and restoring the Cloud Pak using OADP, starting from a fresh install of 2023.4.1.

Configuring OADP

You will need an S3 location for the backups. The simplest option is the IBM Cloud Object Storage service. The "Lite" plan is free to use and gives you enough storage to try the feature.

  1. Go to IBM Cloud, and create a Cloud Object Storage instance.
  2. Go to the instance created, and create a bucket. You can use the "Quickly get started" template.
  3. Make a note of the service credentials (or you can come back here later). You'll need the ones with the "hmac" suffix.
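
If you prefer the command line, steps 1 and 3 can be sketched with the IBM Cloud CLI. The instance and key names below are my own placeholders, and the commands are echoed as a dry run so you can review them before running anything for real:

```shell
# Dry-run sketch of creating the COS instance and HMAC credentials via the
# IBM Cloud CLI. "my-cos" and "my-cos-hmac" are placeholder names.
INSTANCE=my-cos
KEY=my-cos-hmac

# Create a "Lite" plan Cloud Object Storage instance:
CREATE_INSTANCE="ibmcloud resource service-instance-create $INSTANCE cloud-object-storage lite global"

# Create service credentials with HMAC keys (the "hmac" credentials from step 3):
CREATE_HMAC_KEY="ibmcloud resource service-key-create $KEY Writer --instance-name $INSTANCE --parameters '{\"HMAC\": true}'"

echo "$CREATE_INSTANCE"
echo "$CREATE_HMAC_KEY"
# The bucket itself can be created in the IBM Cloud console as in step 2.
```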

Now you can install and configure OADP on the OpenShift cluster.

  1. Install the OADP Operator. This can be done by going to OperatorHub, searching for "OADP", selecting the Red Hat operator, and installing it with the default configuration.
  2. Create a secret in the openshift-adp namespace with the credentials for your IBM Cloud S3 location:
    • Get the access key and the key ID from the "cos_hmac_keys" section in IBM Cloud.
    • Notice that the formatting follows the AWS configuration. For more information, see Configuring OADP with AWS.
      kind: Secret
      apiVersion: v1
      metadata:
        name: cloud-credentials
        namespace: openshift-adp
      stringData:
        cloud: |
          [default]
          aws_secret_access_key=<yourkey>
          aws_access_key_id=<yourkeyid>
      type: Opaque
  3. Create a DataProtectionApplication resource. This defines your storage locations:
    • Change the "region" and "s3Url" fields if you didn't create the Cloud Object Storage in "us-south".
    • Add the name of the bucket you created in Cloud Object Storage.
    • Add a prefix (this can be anything) to identify your backups.
      apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        name: integration
        namespace: openshift-adp
      spec:
        backupLocations:
          - velero:
              config:
                insecureSkipTLSVerify: 'true'
                region: us-south
                s3ForcePathStyle: 'true'
                s3Url: 'https://s3.us-south.cloud-object-storage.appdomain.cloud'
              credential:
                key: cloud
                name: cloud-credentials
              default: true
              objectStorage:
                bucket: <bucket-name>
                prefix: <new-prefix-of-your-choice>
              provider: aws
        configuration:
          restic:
            enable: true
          velero:
            customPlugins:
              - image: >-
                  cp.icr.io/cp/appc/acecc-velero-plugin-prod@sha256:146e94e3419d0497d0a641b6dd615a69a4efde290781041db2a639862de73926
                name: app-connect
              - image: >-
                  cp.icr.io/cp/icp4i/ar-velero-plugin:1.6.0-2023-12-05-0925-7f53ade0@sha256:0926ee19e22f7ce454dafb168b228ee04294954fc7fee2a99e1ce045313febb7
                name: integration
            defaultPlugins:
              - openshift
              - aws
            logLevel: debug
      
  4. Validate that the DataProtectionApplication was created successfully by checking the BackupStorageLocation resource in the openshift-adp namespace. It should have a phase of "Available".
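
As an alternative to pasting the Secret YAML, the same secret can be created from a credentials file with the CLI. A sketch (the key values are placeholders, and the oc commands are echoed as a dry run so nothing is applied to a cluster):

```shell
# Write the Velero-style credentials file (same content as the Secret's
# "cloud" key above). The key values here are placeholders.
cat > credentials-velero <<'EOF'
[default]
aws_secret_access_key=<yourkey>
aws_access_key_id=<yourkeyid>
EOF

# Dry run: print the equivalent oc commands to create the secret and to
# verify the BackupStorageLocation reports a phase of "Available".
echo "oc create secret generic cloud-credentials -n openshift-adp --from-file=cloud=credentials-velero"
echo "oc get backupstoragelocation -n openshift-adp"
```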

Labelling resources for backup

Now that the OADP application is ready, you need to label all the resources you want to back up. Details on how the labels work can be found in the Label the instances to back up section.

A few considerations for the label commands below:

  • I have installed my operators in "A single namespace on the cluster" mode, which means I also need to label the OperatorGroup resource.
  • I am restoring into the same namespace on the same cluster, so I am not labelling the catalog sources for backup. However, labelling the catalog sources is easy to do if you also want to restore them on a new cluster.

You can do the labelling quickly using the CLI:

  1. Change namespace to the namespace where the Cloud Pak is installed:
    oc project <namespace>
  2. Label all the Subscriptions. I gave each one a component=subscription label:
    oc label subscription ibm-integration-platform-navigator backup.integration.ibm.com/component=subscription
    oc label subscription ibm-appconnect backup.appconnect.ibm.com/component=subscription
    oc label subscription ibm-integration-asset-repository backup.eventstreams.ibm.com/component=subscription
    oc label subscription ibm-common-service-operator backup.eventstreams.ibm.com/component=subscription
  3. Label all the instances that you have in that namespace. In my case, those are:
    oc label platformnavigator --all backup.integration.ibm.com/component=platformnavigator
    oc label assetrepository --all backup.integration.ibm.com/component=assetrepository
    oc label dashboard --all backup.appconnect.ibm.com/component=dashboard
    oc label designerauthoring --all backup.appconnect.ibm.com/component=designerauthoring
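
The labelling steps above can also be generated with a small helper, printed as a dry run so you can review the commands before piping them to a shell. The namespace name here is an assumption:

```shell
# Dry-run sketch: print the oc label commands from step 3 so you can review
# them first. NS is an assumed Cloud Pak namespace name.
NS=integration

print_label_cmd() {
  kind=$1
  label=$2
  echo "oc label $kind --all $label -n $NS"
}

print_label_cmd platformnavigator  backup.integration.ibm.com/component=platformnavigator
print_label_cmd assetrepository    backup.integration.ibm.com/component=assetrepository
print_label_cmd dashboard          backup.appconnect.ibm.com/component=dashboard
print_label_cmd designerauthoring  backup.appconnect.ibm.com/component=designerauthoring
```

Once reviewed, the same loop can be piped to `sh` to apply the labels for real.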

You can also use the Platform UI to apply labels, or to check the labels on each instance. For example, in this other environment I can see that my Kafka cluster and my Automation assets have the labels applied.

The labelling experience also has auto-complete to help you re-use backup labels that you might have on other instances.

Backup and restore

Backing up and restoring is as simple as creating a Backup resource, and a Restore resource.

  1. To create the backup, create this Backup resource:
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: integration
      namespace: openshift-adp
    spec:
      ttl: 720h0m0s
      defaultVolumesToRestic: false
      includeClusterResources: true
      includedNamespaces:
      - '*'
      orLabelSelectors:
      - matchExpressions:
        - key: backup.integration.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - assetrepository
          - platformnavigator
          - secret   
      - matchExpressions:
        - key: backup.apiconnect.ibm.com/component
          operator: In
          values:
          - api
          - product  
      - matchExpressions:
        - key: backup.appconnect.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - configuration
          - dashboard
          - designerauthoring
          - integrationruntime
          - integrationserver
          - switchserver
      - matchExpressions:
        - key: backup.eventstreams.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - eventstreams
          - kafkaconnect
          - kafkatopic
          - kafkauser
          - kafkabridge
          - kafkaconnector
          - kafkarebalance
    
  2. This will create a backup in IBM Cloud. You can see the backup files if you check Cloud Object Storage.
  3. You can also use the Velero CLI to inspect the backup:
    velero backup describe integration -n openshift-adp --details
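
A couple of further commands I find useful for monitoring a backup while it runs, printed here as a dry run (they assume you are logged in to the cluster and have the velero CLI installed):

```shell
# Dry run: print the commands used to monitor a backup's progress.
BACKUP_PHASE="oc get backup integration -n openshift-adp -o jsonpath='{.status.phase}'"
BACKUP_LOGS="velero backup logs integration -n openshift-adp"

echo "Poll until it prints Completed: $BACKUP_PHASE"
echo "Inspect logs on failure:       $BACKUP_LOGS"
```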

Once the backup is successful, you will see it in OpenShift as well, alongside any other backups you have in the same S3 bucket. I then deleted all the instances and operators in my namespace to simulate a data loss, and removed all the PersistentVolumeClaims as well.

To restore:

  1. Create the Restore resource:
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: integration
      namespace: openshift-adp
    spec:
      backupName: integration
      includeClusterResources: true
      existingResourcePolicy: update
      restorePVs: true
      restoreStatus:
        includedResources:
        - api
        - product
      hooks: {}
      includedNamespaces:
      - '*'
      itemOperationTimeout: 1h0m0s
      orLabelSelectors:
      - matchExpressions:
        - key: backup.integration.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - assetrepository
          - platformnavigator
          - secret
      - matchExpressions:
        - key: backup.apiconnect.ibm.com/component
          operator: In
          values:
          - api
          - product
      - matchExpressions:
        - key: backup.appconnect.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - configuration
          - dashboard
          - designerauthoring
          - integrationruntime
          - integrationserver
          - switchserver
      - matchExpressions:
        - key: backup.eventstreams.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - eventstreams
          - kafkaconnect
          - kafkatopic
          - kafkauser
          - kafkabridge
          - kafkaconnector
          - kafkarebalance
  2. You will see the operators, instances, and pods come back into the same namespace.
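
To check progress, I poll the Restore resource and watch the pods return. Sketched here as a dry run, assuming you are logged in and have the velero CLI; the `<namespace>` placeholder is your Cloud Pak namespace:

```shell
# Dry run: print the commands used to monitor the restore.
RESTORE_PHASE="oc get restore integration -n openshift-adp -o jsonpath='{.status.phase}'"
RESTORE_DETAILS="velero restore describe integration -n openshift-adp --details"
WATCH_PODS="oc get pods -n <namespace> -w"

echo "Poll until Completed:       $RESTORE_PHASE"
echo "Inspect details / warnings: $RESTORE_DETAILS"
echo "Watch workloads return:     $WATCH_PODS"
```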

Get the new admin password (as identity and access management is not currently backed up and restored), and you can access the Platform UI to see all the workloads running again!

Hopefully the above guide gives a good overview of how to try OADP with the Cloud Pak. All of the steps should be possible in under an hour. You can also build more complex backup strategies with:

  • Different labels (for example, one label for the production workload, one for the UAT workload)
  • Different backup CRs for different components (for example, one that backs up operators and another that backs up instances)
  • Backup schedules
  • Integration with CI/CD (for example, back up the operators / stateless components with GitOps and restore the instances with OADP)
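
For scheduled backups, Velero provides a Schedule resource whose template takes the same fields as a Backup spec. A minimal sketch (the name and cron expression are my own choices, and the selector is trimmed to one label for brevity):

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: integration-nightly
  namespace: openshift-adp
spec:
  schedule: '0 2 * * *'        # run every day at 02:00
  template:                    # same fields as a Backup spec
    ttl: 720h0m0s
    includeClusterResources: true
    includedNamespaces:
    - '*'
    orLabelSelectors:
    - matchExpressions:
      - key: backup.integration.ibm.com/component
        operator: In
        values:
        - platformnavigator
        - subscription
```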

Think about how you can use OADP to simplify your backups and restores. I look forward to hearing any feedback, so please do not hesitate to get in touch. Further documentation can be found here: Backing up and restoring IBM Cloud Pak for Integration.
