The Red Hat OpenShift Pipelines Operator allows you to set up continuous integration and continuous delivery (CI/CD) pipelines. OpenShift Pipelines runs each step of a CI/CD pipeline in its own container, so each step can scale independently to meet the demands of the pipeline. It allows developers to build, test, and deploy applications across multiple cloud providers and on-premises clusters.
This tutorial is intended for users who want to set up tasks and pipelines using Red Hat OpenShift Pipelines and accomplish tasks such as building a Java Maven project hosted on GitHub and creating an image of the repository. It also explores custom tasks that run bash scripts on the underlying Red Hat OpenShift Container Platform cluster.
We ran and validated this tutorial on IBM Power, and the good news is that the steps for Power do not differ at all from the steps for other architectures.
The Red Hat OpenShift Pipelines Operator is a cloud-native CI/CD solution based on Kubernetes resources that uses Tekton building blocks to automate deployments across multiple platforms. Pipelines and tasks are the two main components of the Red Hat OpenShift Pipelines Operator. A running instance of a pipeline is called a PipelineRun, and a running instance of a task is called a TaskRun. The operand is the managed workload (which includes both PipelineRun and TaskRun resources) that the operator provides as a service.
Tasks are the building blocks of a pipeline and consist of sequentially executed steps. A task is essentially a function with defined inputs and outputs. Tasks are reusable and can be used in multiple pipelines.
Steps are a series of commands that are sequentially executed by the task to achieve a specific goal, such as building an image. Every task runs as a pod, and each step runs in a separate container within that pod. Because the steps run within the same pod, they can access the same volumes for caching files, config maps, and secrets.
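To illustrate how steps share the pod that a task runs in, the following is a minimal sketch of a task with two steps that exchange a file through a common workspace. The task name, image, and file names are only illustrative and are not part of this tutorial's pipelines.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: two-step-example
spec:
  workspaces:
    - name: shared-data              # one volume mounted into every step container
  steps:
    - name: write-file               # first container in the task pod
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/usr/bin/env bash
        echo "built artifact" > $(workspaces.shared-data.path)/artifact.txt
    - name: read-file                # second container, same pod, same volume
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/usr/bin/env bash
        cat $(workspaces.shared-data.path)/artifact.txt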
Prerequisite
Make sure that you have a fully functional Red Hat OpenShift cluster with at least one storage class (to supply storage to the pipeline tasks).
Note: In our tutorial, the NFS storage class is used.
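To confirm that at least one storage class is available before you start, you can run the following command; the output depends on your cluster's storage configuration.
[root@pe2-fedora31 ]# oc get storageclass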
Estimated time
If the cluster is ready to use, the following tasks can be completed in approximately 2 hours.
Steps
1. Install the Red Hat OpenShift Pipelines Operator from the user interface.
- Log in to the Red Hat OpenShift cluster and in the left pane, click Operators > OperatorHub.
- On the OperatorHub page, scroll down to the Source section and select Red Hat as the source.
- Type pipeline as the keyword to filter the list, and then click Red Hat OpenShift Pipelines.
- On the Red Hat OpenShift Pipelines page, click Install.
- On the installation page, retain the default options and click Install.
- The installation process takes approximately a minute to complete. The following figure shows the Red Hat OpenShift Pipelines Operator after a successful installation.
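Optionally, you can verify the installation from a terminal. In a default installation, the pipeline components run in the openshift-pipelines namespace, so the following command should show the Tekton controller and webhook pods in the Running state.
[root@pe2-fedora31 ]# oc get pods -n openshift-pipelines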
2. Create tasks using Red Hat OpenShift Pipelines
You can create custom tasks and run a shell script inside a task to install an operator, uninstall an operator, create projects, and so on. The example in this section shows how to create and customize a task, run a shell script, and install an operator.
- Log in to the cluster using the oc command from a terminal and create a project.
[root@pe2-fedora31 ]# oc new-project demo-pipelines
- Apply the ClusterRoleBinding YAML file for the new namespace. The following content is an example to set up the ClusterRoleBinding configuration for your project (in this example, it is demo-pipelines).
[root@pe2-fedora31 ]# cat setup.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pipeline-role-binding
subjects:
  - kind: ServiceAccount
    name: pipeline
    namespace: demo-pipelines
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
[root@pe2-fedora31 ]# oc apply -f setup.yaml
clusterrolebinding.rbac.authorization.k8s.io/pipeline-role-binding created
[root@pe2-fedora31 ]#
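Optionally, confirm that the binding exists and that the pipeline service account (which the Pipelines Operator creates in each project) is present in the new namespace:
[root@pe2-fedora31 ]# oc get clusterrolebinding pipeline-role-binding
[root@pe2-fedora31 ]# oc get serviceaccount pipeline -n demo-pipelines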
- In this example, a custom task is created to run a shell script and to install a Jaeger Operator. To create a custom task, perform the following steps:
- Create a task (shell-task.yaml) to run a shell script and install the Jaeger Operator from the Red Hat catalog.
Note: This task can be extended to run complete bash scripts. You have to pre-build a test image with the necessary packages and dependencies and then use that test image to run development or testing procedures, instead of using an openshift4/ose-cli or ubi image.
shell-task.yaml:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: shell-task
spec:
  params:
    - name: input_data
      type: string
      description: Input text
      default: Welcome to Pipelines
  steps:
    - name: welcome-step
      image: registry.redhat.io/openshift4/ose-cli:v4.10
      script: |
        #!/usr/bin/env bash
        echo "#Shell Task: Simple shell script task"
        echo "#Shell Task: $(params.input_data)"
        echo "#Create Jaeger Operator"
        cat <<EOF | oc apply -f -
        apiVersion: operators.coreos.com/v1alpha1
        kind: Subscription
        metadata:
          name: jaeger-product
          namespace: openshift-operators
        spec:
          channel: stable
          name: jaeger-product
          source: redhat-operators
          sourceNamespace: openshift-marketplace
        EOF
        sleep 20
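After the pipeline runs this task (see the PipelineRun step later in this section), you can check that the Subscription was created and that the Jaeger Operator installation completed. These commands only report on resources that the task itself creates:
[root@pe2-fedora31 samples]# oc get subscription jaeger-product -n openshift-operators
[root@pe2-fedora31 samples]# oc get csv -n openshift-operators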
- The following ClusterTask cleans up a namespace by deleting the foo project.
oc-cli-task.yaml:
apiVersion: tekton.dev/v1beta1
kind: ClusterTask
metadata:
  name: clean-namespaces
spec:
  steps:
    - name: remove-foo-project
      image: registry.redhat.io/openshift4/ose-cli:v4.10
      command: ["/bin/bash"]
      args:
        - "-c"
        - "oc delete project foo --ignore-not-found=true"
- Create a pipeline to run the custom task. The following pipeline.yaml file first runs the shell-task task defined in the shell-task.yaml file and then runs the clean-namespaces ClusterTask defined in the oc-cli-task.yaml file. The pipeline.yaml file contains the details of the tasks to be run in the pipeline.
pipeline.yaml:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: test-pipeline
spec:
  params:
    - name: input_data
      type: string
      description: Input text
      default: Welcome to Pipelines
  tasks:
    - name: shell-task
      taskRef:
        name: shell-task
        kind: Task
      params:
        - name: input_data
          value: $(params.input_data)
    - name: clean-namespaces
      taskRef:
        name: clean-namespaces
        kind: ClusterTask
      runAfter:
        - shell-task
- In the pipeline-run.yaml file, specify the pipeline (test-pipeline) to be invoked and provide the parameters required at runtime. The pipeline-run.yaml file is the triggering point for the execution of the pipeline.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: test-pipeline-
spec:
  params:
    - name: input_data
      value: Pipelines sample task
  pipelineRef:
    name: test-pipeline
  timeout: 15m
- Create the Red Hat OpenShift Pipelines resources, such as the task, ClusterTask, and pipeline, by using the previously created YAML files.
[root@pe2-fedora31 samples]# oc apply -f shell-task.yaml
task.tekton.dev/shell-task created
[root@pe2-fedora31 samples]#
[root@pe2-fedora31 samples]# oc apply -f oc-cli-task.yaml
clustertask.tekton.dev/clean-namespaces created
[root@pe2-fedora31 samples]#
[root@pe2-fedora31 samples]# oc apply -f pipeline.yaml
pipeline.tekton.dev/test-pipeline created
[root@pe2-fedora31 samples]#
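Optionally, if you have the Tekton CLI (tkn) installed, you can list the resources you just created:
[root@pe2-fedora31 samples]# tkn task list -n demo-pipelines
[root@pe2-fedora31 samples]# tkn clustertask list
[root@pe2-fedora31 samples]# tkn pipeline list -n demo-pipelines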
- Run the pipeline-run.yaml file to trigger the pipeline. Because the PipelineRun definition uses generateName, use oc create (not oc apply) so that each run gets a unique name.
Note: The custom task that installs the Jaeger Operator runs only after the pipeline-run.yaml file is executed.
[root@pe2-fedora31 samples]# oc create -f pipeline-run.yaml
pipelinerun.tekton.dev/test-pipeline-l2l9ns created
- Notice that the execution of the pipeline and the logs are displayed on the Red Hat OpenShift console on the Logs tab.
To view the logs, click Pipelines --> Pipelines, and select the project (in this example, it is demo-pipelines) from the drop-down list.
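If you prefer the terminal, and the Tekton CLI (tkn) is installed, you can also follow the logs of the most recent run directly:
[root@pe2-fedora31 samples]# tkn pipelinerun logs --last -f -n demo-pipelines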
- Use the jib-maven ClusterTask feature of pipelines to build a Java Maven project hosted on GitHub.
Note: The following steps explain how to use a pipeline to clone a Java Maven GitHub project, create a Maven build, and create an image of the GitHub repository. The jib-maven ClusterTask provides the Maven build and the image creation feature. Jib is a Java containerizer from Google that helps Java developers build images using build tools such as Maven and Gradle. You do not need any prior knowledge of installing Docker or maintaining Dockerfiles to use Jib.
The pre-existing ClusterTasks provided by OpenShift Pipelines for this purpose are git-clone and jib-maven. You do not need to define task (.yaml) files to set up this pipeline because the tasks are pre-defined; you only need the pipeline (.yaml) files.
- Create a shared-pvc (PersistentVolumeClaim) to share the workspace between the two tasks. The pipeline has the git-clone and jib-maven tasks, and a shared workspace that is backed by a persistent volume on the cluster.
shared-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
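You can create the claim right away and check that it is handled by your storage class. With some storage classes, the claim remains in the Pending state until the first pod that uses it starts, which is expected:
[root@pe2-fedora31 jib-maven]# oc apply -f shared-pvc.yaml
[root@pe2-fedora31 jib-maven]# oc get pvc shared-pvc -n demo-pipelines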
- Specify the task references (taskRef) in the pipeline.yaml file. The git-clone task uses the Git repository URL param value to clone the repository into the provided workspace. The jib-maven task then builds the Java Maven project and creates an image of the project with the help of Jib. This example uses the local image registry to push the image.
pipeline.yaml:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: jib-maven-pipeline
spec:
  params:
    - name: SUBDIR
      description: where to clone the git repo
      default: jib-maven
  workspaces:
    - name: source
  tasks:
    - name: clone-git-repo
      taskRef:
        name: git-clone
        kind: ClusterTask
      workspaces:
        - name: output
          workspace: source
      params:
        - name: url
          value: https://github.com/che-samples/console-java-simple
        - name: subdirectory
          value: $(params.SUBDIR)
        - name: deleteExisting
          value: "true"
    - name: build
      taskRef:
        name: jib-maven
        kind: ClusterTask
      runAfter:
        - clone-git-repo
      workspaces:
        - name: source
          workspace: source
      params:
        - name: DIRECTORY
          value: $(params.SUBDIR)
        - name: IMAGE
          value: image-registry.openshift-image-registry.svc:5000/$(context.pipelineRun.namespace)/jib-maven
        - name: INSECUREREGISTRY
          value: "false"
        - name: MAVEN_IMAGE
          value: docker.io/maven:3.6.3-adoptopenjdk-11
- Run the following code snippet (pipelinerun.yaml) to start the execution of the pipeline (jib-maven-pipeline) and to create a PipelineRun.
pipelinerun.yaml:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: jib-maven-run-
spec:
  pipelineRef:
    name: jib-maven-pipeline
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: shared-pvc
  timeout: 15m
- After creating the pipeline, run it to build the Java Maven GitHub project. The run also creates an image of the project and pushes it to the local registry (image-registry.openshift-image-registry.svc:5000/$(context.pipelineRun.namespace)/jib-maven).
- Apply the shared-pvc.yaml and pipeline.yaml files, and then create the PipelineRun from the pipelinerun.yaml file.
[root@pe2-fedora31 jib-maven]# oc apply -f pipeline.yaml
pipeline.tekton.dev/jib-maven-pipeline created
[root@pe2-fedora31 jib-maven]# oc create -f tests/run.yaml
pipelinerun.tekton.dev/jib-maven-run-kzrlq created
[root@pe2-fedora31 jib-maven]#
- After running the pipelinerun.yaml file, notice that the logs for the successful build of the project and image creation are displayed on the Red Hat OpenShift console.
To view the logs, click Pipelines --> Pipelines and select the project (in this example, demo-pipelines) from the drop-down list.
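Because the image is pushed to the internal registry, you can also verify the result from the terminal. Pushing to image-registry.openshift-image-registry.svc:5000/demo-pipelines/jib-maven creates an image stream named jib-maven in that project (assuming the run was created in the demo-pipelines project):
[root@pe2-fedora31 jib-maven]# oc get imagestream jib-maven -n demo-pipelines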
- For future executions of the pipeline, you only need to create a new PipelineRun from the pipelinerun.yaml file.
Summary
In this tutorial, we did the following:
- We explained how to create tasks that run shell scripts and oc CLI commands.
- With the help of Red Hat OpenShift Pipelines, we cloned the repository, built the project, created an image of the repository, and pushed the image to the local registry.