Containers, Kubernetes, OpenShift on Power


Red Hat OpenShift Operator certification for IBM Power

By Mayur Waghmode posted Mon September 19, 2022 02:11 PM

  

There is a vast amount of information about Operators and why they are ideal for packaging, deploying, managing, and distributing containerized applications and microservices on the Red Hat OpenShift Container Platform. So, if you’re just getting started with Operators, I recommend some of the following resources to answer any questions you may have. Otherwise, if you’re ready to realize the benefits of certifying your IBM Power Operator on Red Hat OpenShift and want to know how to do it, continue reading the tutorial.

Why certify?

Certifying your IBM Power-based Operators on Red Hat OpenShift can improve the time-to-value for your customers because certified Operators use components that have been tested on OpenShift and they provide a consistent packaging, deployment, and management experience across hybrid cloud environments. Also, certification ensures that your Operators and their containers are fully supported when used with OpenShift.

The good news is that the certification process for IBM Power Operators differs very little from that for other architectures. This tutorial describes the high-level steps and highlights the Power-specific considerations to ensure that your certification goes smoothly.

Note: All container images referenced in an Operator bundle must themselves be certified. So, before proceeding with Red Hat Operator certification, make sure that all Operator images are certified and published in the Red Hat Ecosystem Catalog. If they are not, follow the instructions to certify them.

Let’s get started

To certify your Power Operator on OpenShift, you’ll need to register your company as a Red Hat Technology Partner, if it isn’t already, and then follow the certification workflow defined in the Red Hat OpenShift Operator Certification Program.

The program enables you to certify your Operators and distribute them on OperatorHub and Red Hat Marketplace. The benefit of this is that users can easily install your software on their OpenShift clusters from either source with the assurance that it’ll be monitored and updated to help reduce interoperability failure or security risks. Beyond that, certified OpenShift Operators are jointly supported by Red Hat and their partners.

Workflow

This section describes the three primary workflows of the certification process: onboarding, testing, and publication. As I mentioned, many of the steps for Power are the same as for any other architecture. So, instead of rewriting the entire process, I’ve provided a list of each step, highlighting the IBM Power-specific tasks (if any), and pointing to the Red Hat documentation for the detailed step-by-step guidance.

If you’re more of a visual learner, check out this video guide I created.

Certification onboarding

The onboarding workflow includes the following tasks.

  1. Create an OpenShift Operator bundle project.
  2. Get a project ID (PID). After creating the Operator bundle project, you’ll see the PID in the Certification documentation section. Copy the PID. You will add it to the ci.yaml file of your Operator bundle image later.
    Note: Don’t copy the “ospid-” part of the PID. For example, if the PID is ospid-6205e94ce2200758b20df2c1, then only copy 6205e94ce2200758b20df2c1.
  3. Configure the Operator bundle and complete the pre-certification checklist.
  4. (Optional) Publish the Operator bundle to Red Hat Marketplace.
  5. Get API key.
    Important: Make sure that you copy the API key because you won’t be able to view it again.
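As a concrete sketch of step 2, the PID ends up in the ci.yaml file of your Operator bundle. The cert_project_id field name follows the layout used in Red Hat's certified-operators repository; verify it against the current documentation:

```shell
# Minimal ci.yaml carrying the project ID (without the "ospid-" prefix).
# The cert_project_id field name is taken from the certified-operators
# repository layout; the PID below is the sample value from step 2.
cat > ci.yaml <<'EOF'
cert_project_id: 6205e94ce2200758b20df2c1
EOF
```

In a real bundle, this file sits alongside your Operator's version directories in your fork of the certified-operators repository.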

Certification testing

The testing workflow is shown in Figure 1. In this case, OpenShift Pipelines, which is based on Tekton, is used to run the certification tests. OpenShift Pipelines lets you view comprehensive logs and debugging information in real time. When you are ready to certify and publish your Operator bundle, the pipeline submits a pull request (PR) to GitHub on your behalf, and if everything passes, your Operator is automatically merged and published in the Red Hat Container Catalog and the embedded OperatorHub in OpenShift.
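Once a pipeline run is underway, you can follow it from a terminal with the standard Tekton CLI. The commands below are written to a small helper script rather than executed here, because they need a live cluster:

```shell
# Helper script to follow certification pipeline runs with the tkn CLI.
# Saved to a file rather than run here because it requires a live cluster.
cat > watch-pipeline.sh <<'EOF'
#!/bin/sh
# List recent pipeline runs in the current namespace.
tkn pipelinerun list
# Follow the logs of the most recent run.
tkn pipelinerun logs --last -f
EOF
chmod +x watch-pipeline.sh
```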

Figure 1: Overview of running the certification test suite locally


Before you begin, you will need the following:

  • OpenShift cluster version 4.8 or later installed. I ran the certification test suite on an OpenShift on Power cluster in IBM Power Virtual Server.
    Power-specific notes:
    1. From a terminal window, run the following commands to log in to your Power Virtual Server OpenShift cluster:
      ssh -i <ssh_key> root@<bastion_node_ip>
      oc login -u kubeadmin -p <kubepassword>
    2. The continuous integration (CI) pipeline will make a persistent volume claim (PVC) for a 5 GB volume. Make sure that you have set up a Network File System (NFS) server and configured it on your cluster for persistent storage.
  • A kubeconfig file for an admin user who has cluster admin privileges.
  • A valid Operator bundle
  • The OpenShift CLI tool (oc) version 4.7.13 or later installed.
  • The Git CLI tool (git) version 2.32.0 or later installed.

    yum install git -y
    git version
  • The Tekton CLI tool (tkn) version 0.19.1 or later installed.

    curl -LO https://github.com/tektoncd/cli/releases/download/v0.22.0/tkn_0.22.0_Linux_ppc64le.tar.gz
    sudo tar xvzf tkn_0.22.0_Linux_ppc64le.tar.gz -C /usr/local/bin/
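To illustrate the NFS prerequisite above, here is a minimal sketch of a statically provisioned 5 GB NFS-backed PersistentVolume that the pipeline's claim can bind to. The server address, export path, and volume name are placeholders for your own setup, not values from the certification docs:

```shell
# Sketch: a 5 GB NFS-backed PersistentVolume manifest for the pipeline's
# PVC. <nfs_server_ip>, /export/pipeline, and the volume name are
# placeholders -- adjust them to match your NFS server and export.
cat > pipeline-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: operator-pipeline-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs_server_ip>
    path: /export/pipeline
EOF
# Create the volume on the cluster with:
#   oc apply -f pipeline-pv.yaml
```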

Then, perform the following steps to complete the certification testing:

  1. Add your Operator bundle.
  2. Fork the repository.
  3. Install the OpenShift Pipeline Operator.
  4. Configure the OpenShift (oc) CLI tool.
  5. Create an OpenShift Project namespace.
  6. Add the kubeconfig secret.
  7. Import Operator from Red Hat Catalog.
  8. Grant the anyuid security context constraints (SCC) to the default pipeline service account. This will avoid any pipeline permission issues on Power. From a terminal window, run the following command:
    oc adm policy add-scc-to-user anyuid -z pipeline
  9. Install the certification pipeline dependencies on the cluster.
  10. Configure the repository to submit the certification results.
    1. Add the GitHub API token.
    2. Add the Red Hat Container API access key.
    3. Enable digest pinning. This step is mandatory to submit the certification results to Red Hat.
    4. (Optional) Use a private container registry. By default, the pipeline creates images in the OpenShift Container Registry on the cluster. If you want to use an external private registry, then you must provide credentials by adding a secret to the cluster.
  11. Run the OpenShift Operator pipeline. There are three different methods:
    • Run the minimal pipeline.
    • Run the pipeline with image digest pinning.
    • Run the pipeline with a private container registry.
      Regardless of the method used, for Power, you must add the following line to the tkn pipeline command, as described at the link above.
      --pod-template templates/crc-pod-template.yml \
      After running the commands, you will be prompted for several additional parameters. Change the value of the pipeline_image parameter from its default, quay.io/redhat-isv/operator-pipelines-images:released, to quay.io/redhat-isv/operator-pipelines-images:multi-arch. Accept the default values for the remaining parameters.
  12. Submit the certification results. Complete the prerequisite steps and then complete one of the four methods for submitting your results:
    • Submit with the minimal pipeline.
    • Submit with the image digest pinning.
    • Submit with the private container registry.
    • Submit with image digest pinning and from a private container registry.
      Regardless of the method used to submit the results, for Power, you must add the following line to the tkn pipeline command, as described at the link above.
      --pod-template templates/crc-pod-template.yml \
      After running the commands, you will be prompted for several additional parameters. Change the value of the pipeline_image parameter from its default, quay.io/redhat-isv/operator-pipelines-images:released, to quay.io/redhat-isv/operator-pipelines-images:multi-arch. Accept the default values for the remaining parameters.
  13. Fix issues, if any. When you run the pipeline, you will be able to view logs that contain details about any errors or failures that need to be addressed before obtaining certification.

    • If you ran the pipeline using the tkn CLI tool, logs will be printed in the terminal.
    • If you have access to the OpenShift console, you can review the results of the pipeline and the logs from the console as well.
    • If you encounter the error “the server doesn’t have a resource type ‘clusterversion’”, delete the kubeconfig secret and create a new one by running the following commands:

      # Log in to the OpenShift cluster
      oc login -u kubeadmin -p <kubepassword>
      # Delete the current kubeconfig secret
      oc delete secret kubeconfig
      # Set the KUBECONFIG variable
      export KUBECONFIG=/root/.kube/config
      # Create the kubeconfig secret again
      oc create secret generic kubeconfig --from-file=kubeconfig=$KUBECONFIG
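Putting the Power-specific pieces from steps 11 and 12 together, a pipeline run might look like the sketch below. The pipeline name and the git_repo_url, git_branch, bundle_path, and env parameter names are assumptions based on the operator-pipelines project, so verify them against the Red Hat documentation for your pipeline version; the --pod-template flag and the pipeline_image override are the Power-specific parts described above. The command is saved as a script rather than executed here because it requires a live cluster:

```shell
# Sketch only: pipeline and parameter names are assumptions from the
# operator-pipelines project; the <placeholders> are yours to fill in.
# Passing pipeline_image as a --param up front replaces answering the
# interactive prompt described in steps 11 and 12.
cat > run-cert-pipeline.sh <<'EOF'
#!/bin/sh
tkn pipeline start operator-ci-pipeline \
  --param git_repo_url=git@github.com:<your_github_user>/certified-operators.git \
  --param git_branch=main \
  --param bundle_path=operators/<operator_name>/<version> \
  --param env=prod \
  --param pipeline_image=quay.io/redhat-isv/operator-pipelines-images:multi-arch \
  --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml \
  --pod-template templates/crc-pod-template.yml \
  --showlog
EOF
chmod +x run-cert-pipeline.sh
```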

After achieving successful results for all the certification checks, your pull request is sent to the upstream repository for verification and publication in the Operator catalog. The pull request triggers the Operator hosted pipeline, which must again pass all the certification checks. If any issues arise, follow the tips in the Troubleshooting the Operator Cert Pipeline guide to resolve them; you don’t need to run the local pipeline again.

Publish the certified Operator

After all the tests have passed successfully, and the certification pipeline is enabled to submit results to Red Hat, the certification is considered complete, and your Operator will appear in the Red Hat Container Catalog and embedded OperatorHub within OpenShift.

Thanks for reading! I hope you found this tutorial helpful.

Originally published on IBM Developer
