Cloud Pak for Integration

Restoring Cloud Pak for Integration workload using GitOps & OADP

By Hasan Rizvi posted Fri October 24, 2025 05:57 AM

  

    Introduction

    You can use Red Hat OpenShift APIs for Data Protection (OADP), which uses Velero under the covers, together with Red Hat OpenShift GitOps (powered by Argo CD) to run a manual, Git-driven restore from an existing backup in object storage. This article walks through the process in a single namespace, with a Cloud Pak for Integration workload (API Connect, MQ, App Connect, Platform UI) as the running example.

    To set up and create a backup that can later be used for restoration, follow the steps in Backing up and restoring Cloud Pak for Integration in the IBM documentation. You can use your own workload, or set up a sample workload by following Tutorial: Using the assembly canvas to create messaging workflows with Kubernetes resources to create deployments that you can back up and restore.

    The goal is a safe, auditable, and repeatable restore flow: you declare a Velero Restore in Git, Argo CD (with the right RBAC) applies it, and you intentionally re-trigger restores by bumping an annotation and committing the change. This procedure is most applicable for platform/SRE and app teams that already have backups in an S3-compatible bucket and want Git to be the control plane for recovery. Backup creation is not covered here so that we can focus on the restore path: defining the Restore CR, choosing its scope (the namespace and any needed cluster resources), and deciding overwrite behavior (create-only vs. update). After a successful restore, the namespace is recreated with the expected CRDs/CRs, pods become Ready, data on PVCs is restored where relevant, Argo CD shows `Synced`, and Velero shows `Completed`.

    Repo quick links

    These are the links to all the code resources used in this tutorial:

    Argo CD app (restore): app-restore.yml

    Argo CD app (DPA bootstrap, optional): app-dpa.yml

    Data Protection Application: dataprotectionapplication/DataProtectionApplication.yml

    Restore CR (the heart of it): restore/velero-restore.yml

    RBAC for Argo CD controller: rbac-premissions.yaml

    (Reference) Example backup used by restore: backups/backup-hasan-oadp-demo.yaml

    By the end, you’ll have a small, reviewable set of YAMLs in Git that let you confidently and repeatedly restore a CP4I namespace (or any app namespace) with a one-line commit, with no bespoke scripts or manual Velero invocations required, and a clean audit trail.

    Prerequisites

    NOTE: You also have the option to bring your own workload.

    Install OpenShift GitOps (Argo CD)

    If Argo CD is already set up on your Red Hat OpenShift Container Platform (RHOCP) cluster, you can skip this step.

    To get started, install the Red Hat OpenShift GitOps operator from OperatorHub. By default, this installs OpenShift GitOps into the `openshift-gitops` namespace. For the purposes of this walkthrough, also enable cluster monitoring on the desired namespace:

    oc label namespace <YOUR NAMESPACE> openshift.io/cluster-monitoring=true

    In our case, that is:

    oc label namespace oadp-ns openshift.io/cluster-monitoring=true

    In this guide, all of the workload is installed in a namespace called oadp-ns.

     

    Once the operator is up and running, you’ll have an Argo CD instance ready to manage GitOps workflows.

    Red Hat OpenShift GitOps automatically creates a ready-to-use Argo CD instance in the openshift-gitops namespace, and an Argo CD icon is displayed in the console toolbar, from which you can access the Argo CD instance UI.

     

     

    Log in with the admin account (username: admin). The password is stored under the admin.password key in the <argo_cd_instance_name>-cluster Secret in the openshift-gitops namespace.
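    For example, for the default Argo CD instance (named openshift-gitops), you can decode the password with a command like the following. The secret name is illustrative; adjust it if your instance is named differently:

    ```shell
    # Decode the admin password from the <instance_name>-cluster Secret
    # (here assuming the default instance name "openshift-gitops")
    oc get secret openshift-gitops-cluster -n openshift-gitops \
      -o jsonpath='{.data.admin\.password}' | base64 -d
    ```
    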

    If you are a cluster-admin user, you have the option to log in via OpenShift SSO by selecting LOG IN VIA OPENSHIFT in the Argo CD UI.

    For more information, see the Red Hat documentation: https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.17/html/installing_gitops/installing-openshift-gitops#installing-gitops-operator-using-cli_installing-openshift-gitops

    Declare the Velero Restore

    The core of the workflow is declaring a Velero Restore resource, which tells Velero exactly what to recover from an existing backup. You can either create your own custom restore YAML or use the provided example restore/velero-restore.yml. This resource specifies the backup name to restore from, the namespaces and cluster resources to include, whether to restore persistent volumes, and the policy for handling existing resources. To make the restore safely repeatable, an annotation (metadata.annotations.restore.trigger) is used—each time you increment its value, Argo CD will reapply the restore and Velero will run it again. This file is what drives the actual recovery of your workloads, data, and configurations.

    Below is an explanation of the important key/value pairs to look out for in the Restore YAML:

    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: hasan-oadp-demo-oadp-ns-3  # unique name, so that we don't have to delete the old restores
      namespace: openshift-adp
      annotations:
        # Argo CD will skip dry-run validation errors if referenced CRDs/resources
        # aren't present yet, allowing the sync to continue.
        argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
        # A simple "bump-this" value used to force Argo CD auto-sync to re-apply the
        # object (changing the annotation makes Argo CD see a diff).
        restore.trigger: v3
    spec:
      backupName: <BACKUP_NAME>        # the exact Velero backup to restore from (must already exist in your backup location)
      includeClusterResources: true    # restore cluster-scoped objects, e.g. CRDs, ClusterRoles, StorageClasses
      existingResourcePolicy: update   # if a resource already exists, Velero attempts to patch/update it instead of skipping
      restorePVs: true                 # restore persistent volumes
      includedNamespaces:              # restore across all namespaces
      - '*'
      itemOperationTimeout: 1h0m0s
    # ...

    Give Argo CD the right RBAC in openshift-adp namespace

    To allow Argo CD to manage OADP and Velero resources, you need to grant it the right RBAC permissions in the openshift-adp namespace. The `Argo CD Application Controller` in the openshift-gitops namespace doesn’t have permission to act on resources in other namespaces, including openshift-adp where OADP runs. When you ask Argo CD to sync a Restore or DataProtectionApplication manifest, it tries to create or update those CRs inside openshift-adp.

    This is done by creating a Role and RoleBinding that let the `Argo CD Application Controller` (openshift-gitops-argocd-application-controller) create and update OADP custom resources such as DataProtectionApplication, Backup, and Restore. Without these permissions, syncs will fail when Argo CD tries to apply the restore configuration. You can apply the provided rbac-premissions.yaml, which binds the controller to manage OADP/Velero CRs inside openshift-adp.

    Command:

    oc apply -f https://github.com/demo-test-source/backup-restore-using-gitops/raw/main/rbac-premissions.yaml

    NOTE: This only needs to be done once as a setup step
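    For reference, the shape of the RBAC in that file looks roughly like the following. The Role name and the exact rule lists here are illustrative, and the subject matches the controller service account named above; apply the repo's rbac-premissions.yaml as the source of truth:

    ```yaml
    # Sketch of a Role/RoleBinding letting the Argo CD application controller
    # manage OADP/Velero CRs in openshift-adp (names/rules illustrative)
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: argocd-oadp-manager
      namespace: openshift-adp
    rules:
    - apiGroups: ["velero.io", "oadp.openshift.io"]
      resources: ["backups", "restores", "dataprotectionapplications"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: argocd-oadp-manager
      namespace: openshift-adp
    subjects:
    - kind: ServiceAccount
      name: openshift-gitops-argocd-application-controller
      namespace: openshift-gitops
    roleRef:
      kind: Role
      name: argocd-oadp-manager
      apiGroup: rbac.authorization.k8s.io
    ```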

    Create the Argo CD Applications

    To manage restores with Git, you need to create Argo CD applications that point to the Git repo paths for the `restore` resources (and optionally the `DPA` bootstrap). Argo CD will continuously apply and reconcile these manifests, and with `Auto-Sync` enabled, simply bumping the restore annotation in Git will trigger the restore again. The restore application should always be deployed, configured with server: https://kubernetes.default.svc, namespace: openshift-adp, path: restore, and syncPolicy.automated (with prune and selfHeal), plus syncOptions: ["CreateNamespace=true"]. If you also want Argo CD to manage your Data Protection Application, you can apply the optional DPA app. Once applied, these manifests create visible `Applications` in the Argo CD dashboard, where you can monitor their sync and health status.
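    Putting the settings described above together, the restore Application looks roughly like this. The Application name, project, and repoURL are illustrative; point them at your own Git repository:

    ```yaml
    # Sketch of an Argo CD Application for the restore path
    # (metadata.name, project, and repoURL are placeholders)
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: oadp-restore
      namespace: openshift-gitops
    spec:
      project: default
      source:
        repoURL: https://github.com/demo-test-source/backup-restore-using-gitops
        targetRevision: main
        path: restore                        # the Git path holding velero-restore.yml
      destination:
        server: https://kubernetes.default.svc
        namespace: openshift-adp
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
        - CreateNamespace=true
    ```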

    How to apply:

    • Apply the restore Argo CD Application (required):

    oc apply -n openshift-gitops -f https://github.com/demo-test-source/backup-restore-using-gitops/raw/main/app-restore.yml

    • Apply the DPA Argo CD Application (optional):
    oc apply -n openshift-gitops -f https://github.com/demo-test-source/backup-restore-using-gitops/raw/main/app-dpa.yml

    • After applying, open the `Argo CD dashboard` (from the OpenShift console menu → OpenShift GitOps → Cluster Argo CD). You’ll see the new Applications listed, where you can confirm they’re in `Synced` state and monitor their health.

    Testing: prove it works end-to-end

    Pre-flight checks

    Confirm that your backup exists and that Velero can see it. This avoids a restore failing on a mistyped `backupName`.

    # List backups Velero knows about
    
    velero get backup -n openshift-adp
    
    # (Optional) Describe the named backup used in restore/velero-restore.yml
    
    velero -n openshift-adp describe backup <BACKUP_NAME>

    Functional test: delete → restore

    Simulate the disaster and recover.

    Verify Argo CD + OADP wiring, CRDs, RBAC, and storage config.

    How:

    1) Delete the target namespace (that exists in the backup)

    oc delete ns <YOUR_CP4I_NS>

    2) Trigger the restore (bump annotation in Git and push)

    metadata.annotations.restore.trigger: v1 -> v2 or higher

    3) Watch restore progress

    oc -n openshift-adp get restore -w

    4) Verify objects return

    oc get ns <YOUR_CP4I_NS>
    oc -n <YOUR_CP4I_NS> get all
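    Step 2 above (bumping the trigger) can be sketched as a one-line change committed to Git. The file path and the v1/v2 values are illustrative; use whatever value is currently in your manifest:

    ```shell
    # Bump the restore.trigger annotation so Argo CD auto-sync sees a diff
    # and re-applies the Restore (path and values are examples)
    sed -i 's/restore\.trigger: v1/restore.trigger: v2/' restore/velero-restore.yml
    git commit -am "Re-trigger restore: bump restore.trigger to v2"
    git push
    ```
    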

    Verify data gets restored (PVCs)

    Check that the PVCs are created:

    oc -n <YOUR_CP4I_NS> get pvc

    Test that your Keycloak users are present.

    Test the workload

    This pattern keeps backup creation and restore execution cleanly separated, uses Git as the manual trigger, and gives you a simple, repeatable way to recover a deleted namespace, including your CP4I workloads, without introducing risky automation.

     
