
Expanding Multi-Architecture OpenShift Cluster with IBM zCX for OpenShift Compute Node Instances

  

Introduction

An OpenShift Container Platform cluster with multi-architecture compute machines is designed to accommodate diverse computing architectures. Specifically, this cluster configuration is available on IBM Z user-provisioned infrastructure, where the control plane machines run on the x86_64 architecture while the compute machines use the zCX (s390x/IBM Z) architecture. This mixed multi-architecture capability is supported starting with Red Hat OpenShift version 4.14.0 on zCX Foundation for Red Hat OpenShift. The integration of multi-architecture support underscores the platform's commitment to adaptability and empowers users to harness the strengths of different architectures within a unified Red Hat OpenShift cluster environment. This feature opens up new possibilities for versatility and optimization in composite solutions that span multiple architectures (x86_64 and s390x) in containerized application deployments.

Use Case

This feature empowers clients to harness their current Red Hat OpenShift cluster to deploy workloads on the s390x architecture with both high availability and co-location advantages, all without the need for a separate s390x Red Hat OpenShift cluster installation.

In this blog, we'll showcase the process of integrating an IBM zCX for OpenShift compute instance into an existing x86 Red Hat OpenShift cluster as a day 2 operation. This demonstration covers all essential modifications to the infrastructure configuration.

Prerequisites

  • IBM zCX APAR OA65756
  • Red Hat CoreOS s390x binaries version 4.14.0 or above
  • An existing x86 Red Hat OpenShift cluster at version 4.14.0 or above
  • The ignition file for the x86 Red Hat OpenShift cluster compute node (worker.ign); see "Obtaining the Compute Machine Ignition File" below for the retrieval process
  • The ability to update existing DNS and load balancer entries for an additional zCX for OpenShift compute instance
  • A file server (HTTP/HTTPS) hosting the Red Hat CoreOS binaries and the ignition file, accessible by the zCX compute instances
  • A zCX DVIPA network address configured for bidirectional reachability with the existing OpenShift cluster nodes, the DNS server, and the load balancer

These prerequisites lay the foundation for the integration of zCX for OpenShift compute instances into the existing x86 Red Hat OpenShift cluster, ensuring compatibility, accessibility, and effective communication between different cluster components.  As we proceed with the implementation, these requirements will serve as key elements for a seamless and robust deployment.

Step-by-Step Guide

  • Verifying Cluster Version

Before proceeding with any integration or upgrade, confirm that your x86 OpenShift Container Platform (OCP) cluster is running the required version. Execute `$ oc version` and ensure that the reported server version is 4.14.* or higher. This verification confirms that your cluster meets the minimum version requirement and sets the foundation for the subsequent steps in the integration process.

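As a quick sketch, the check and a representative result might look like the following (all version numbers here are illustrative):

    $ oc version
    Client Version: 4.14.2
    Kustomize Version: v5.0.1
    Server Version: 4.14.2
    Kubernetes Version: v1.27.6+f67aeb3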

  • Enabling Multi-Arch Support

Before you can add s390x zCX for OpenShift compute instances to your cluster, you must upgrade your Red Hat OpenShift cluster to one that uses the multi-architecture payload.  For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines [1].

Because Red Hat OpenShift does not enable the multi-architecture payload by default, first verify that no multi-arch information is present: issue `$ oc adm release info -o json | jq .metadata.metadata` and check that the output contains no multi-arch entry.
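On a cluster still running the single-architecture payload, the check might look like this (the errata URL is illustrative; the key point is the absence of a "release.openshift.io/architecture": "multi" entry):

    $ oc adm release info -o json | jq .metadata.metadata
    {
      "url": "https://access.redhat.com/errata/RHSA-2023:6130"
    }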

Since our x86 Red Hat OpenShift cluster has already been upgraded to version 4.14.2 and is ready for the multi-arch payload, issuing `$ oc adm upgrade --to-multi-arch` initiates multi-arch support.

The cluster will begin downloading the additional architecture payloads during this process. This upgrade paves the way for integrating s390x zCX for OpenShift compute instances and unlocks the capabilities of a multi-architecture Red Hat OpenShift environment. Successful execution of this command ensures that your cluster is well-prepared for the subsequent steps in the deployment process.
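As a minimal sketch, the migration is requested and then monitored as follows (no literal output shown; the plain status command reports progress until the multi payload is fully applied):

    $ oc adm upgrade --to-multi-arch
    $ oc adm upgrade        # monitor the migration to the multi payload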

  • Verifying Multi-Arch Support

After the multi-architecture payload has downloaded, it's crucial to confirm that your cluster is now equipped with multi-arch support. Verify this by reissuing the command from step 2, `$ oc adm release info -o json | jq .metadata.metadata`, and inspecting that the output contains the entry "release.openshift.io/architecture": "multi".

The presence of this entry indicates that the cluster has been successfully upgraded and multi-arch support is enabled. This verification step ensures that your OpenShift cluster is ready to incorporate s390x zCX for OpenShift compute instances and effectively leverage the benefits of a multi-architecture environment.
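A representative result on a migrated cluster (the errata URL is again illustrative):

    $ oc adm release info -o json | jq .metadata.metadata
    {
      "release.openshift.io/architecture": "multi",
      "url": "https://access.redhat.com/errata/RHSA-2023:6130"
    }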

 

  • Cluster Upgrade (Optional)

Optionally, you can upgrade the cluster using either the CLI [2] or the web console [3]. The upgrade process is the same as for a single-architecture OpenShift cluster. Follow either of the methods outlined below:

Execute the appropriate CLI command `$ oc adm upgrade --to=<desired-version>` to update the cluster, replacing <desired-version> with the target version.
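For example, a hypothetical move to a newer 4.14 z-stream might look like this (the target version is illustrative; choose one reported as available by `oc adm upgrade`):

    $ oc adm upgrade                  # list available update versions
    $ oc adm upgrade --to=4.14.5      # request the update
    $ oc get clusterversion -w        # watch progress until the update completes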

Navigate to the web console and follow the upgrade steps provided in the user interface.  The web console offers a user-friendly approach to managing cluster upgrades.

Whichever method you choose, ensure that the cluster upgrade is completed successfully before proceeding to the next steps in joining the IBM zCX for OpenShift compute instances.  This optional step ensures that your Red Hat OpenShift cluster is running the latest version, enhancing stability, security, and compatibility with multi-architecture support.

  • Obtaining the Compute Machine Ignition File

To facilitate the provision and integration of the IBM zCX for OpenShift compute instance, the compute node ignition file is essential.  Follow these steps to ensure you have the required file:

    • Using the Original Compute Machine Ignition File:

If the original compute node ignition file is accessible, you can directly use it for provisioning and joining the IBM zCX for OpenShift compute instance.

    • Generating a Fresh Compute Machine Ignition File:

In case the original file is inaccessible, run the following command to obtain a fresh copy: `$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign`

This command extracts the necessary information and writes the compute node ignition file. Once generated, upload the file to the designated file server (HTTP/HTTPS) along with the corresponding Red Hat CoreOS binaries. Ensure that the file server is reachable from the IBM zCX for OpenShift compute instance.
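As a sketch, the extraction plus a quick reachability check might look like the following (the file server URL is hypothetical):

    $ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
    $ curl -kIs https://fileserver.example.com/zcx/worker.ign | head -1    # expect an HTTP 200 response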

Having a valid and up-to-date compute node ignition file is crucial for the seamless provisioning and integration of the IBM zCX for OpenShift compute instance into your existing Red Hat OpenShift cluster.

  • Configuring DNS Entries and Load Balancer

To ensure proper communication and accessibility for the new IBM zCX for OpenShift compute instance(s), follow these steps to add entries in the DNS server for both forward and reverse lookup.  Additionally, configure the Load Balancer as needed.

    • DNS Entries Configuration:

Refer to the "Example DNS configuration for user-provisioned clusters" [4] for guidance on adding entries for the new zCX for OpenShift compute instance(s).   This includes both forward and reverse lookup entries.  Adjust the configurations based on your specific network and DNS setup.
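As an illustrative sketch only (domain, host names, and addresses are hypothetical), the forward and reverse records for one new compute node in a BIND-style zone might look like:

    ; forward zone (ocp4.example.com)
    worker-2.ocp4.example.com.    IN  A    192.168.1.12

    ; reverse zone (1.168.192.in-addr.arpa)
    12    IN  PTR  worker-2.ocp4.example.com.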

    • Load Balancer Configuration:

Consult the "Example load balancer configuration for user-provisioned clusters" [5] for instructions on configuring the Load Balancer.  This ensures that network traffic is appropriately distributed and reaches the zCX for OpenShift compute instance(s).
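For instance, assuming an HAProxy setup like the referenced example [5] (backend name, server names, and addresses are hypothetical), the new compute node is added to the ingress backends so application traffic can reach it; a matching change is also needed in the HTTP (port 80) backend:

    backend ingress-https
        balance source
        mode tcp
        server worker-0 192.168.1.10:443 check
        server worker-1 192.168.1.11:443 check
        server worker-2 192.168.1.12:443 check    # new zCX for OpenShift compute instance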

Ensure that these configurations align with your network infrastructure and are tailored to accommodate the integration of the new IBM zCX for OpenShift compute instance(s).  Proper DNS and Load Balancer settings are crucial for seamless communication within the OpenShift cluster.

  • Provisioning of zCX for OpenShift Compute Instance(s)

At this stage, all the necessary preparations have been completed to provision and integrate the IBM zCX for OpenShift compute instance(s).  To gain detailed insights into the provisioning workflow and proceed with the cluster provisioning process, refer to the "z/OS Management Facility workflows for zCX for OpenShift" documentation [6].

This documentation serves as a comprehensive guide, offering in-depth information on the workflows involved in the provisioning process.  It covers critical aspects of the integration, ensuring a smooth and well-informed deployment of the IBM zCX for OpenShift compute instances within your Red Hat OpenShift cluster.

    • Compute Node Provisioning:
      • Specify the node type as "compute-node" when initiating the zCX for OpenShift z/OSMF provisioning workflow (ocp_provision.xml).
      • Ensure that the correct ignition file for the compute instance is specified during the z/OSMF workflow execution (worker.ign).
      • Input the remaining parameters for the zCX provisioning workflow. This may include configuration details, networking information, or any other relevant specifications based on your cluster requirements.

  • Joining the multi-architecture Red Hat OpenShift Cluster

Upon successful bring-up of the zCX for OpenShift compute instance(s), follow these steps to validate their attempt to join the existing x86 Red Hat OpenShift cluster and grant the necessary certificate approvals. Issue `$ oc get csr` and inspect the output to confirm that the certificate requestor corresponds to the newly provisioned zCX for OpenShift compute instance and that the certificate status is "Pending".
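A representative listing while a new node is joining (names and ages are illustrative, and some columns are omitted for brevity; expect a first round of CSRs from the node-bootstrapper service account, followed by serving-certificate CSRs from the node itself):

    $ oc get csr
    NAME        AGE   REQUESTOR                                                                   CONDITION
    csr-8b2mt   2m    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending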

Manually approve the pending certificate(s) using `$ oc adm certificate approve csr-xxxx`, replacing csr-xxxx with the specific certificate request identifier.
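If several CSRs are pending, the OpenShift documentation also offers a one-liner that approves all currently unapproved requests (use it deliberately, since it approves everything that is pending):

    $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve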

By following these steps, you facilitate the secure and authenticated integration of the zCX for OpenShift compute instance(s) into the existing multi-architecture OpenShift cluster. Manual approval of certificates ensures the OpenShift cluster administrator's involvement, enhancing security and control over the joining process.

  • Verifying Compute Node(s) Integration and Operation

After the successful approval of certificates, ensure the seamless integration and operational status of the new zCX for OpenShift compute instance(s) within the OpenShift cluster by following these steps:

Execute `$ oc get node -L kubernetes.io/arch` and check that the new compute node(s) appear in the output.

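A representative listing after the join completes (node names, ages, and versions are illustrative; note the architecture label column requested with -L):

    $ oc get node -L kubernetes.io/arch
    NAME       STATUS   ROLES                  AGE   VERSION           ARCH
    master-0   Ready    control-plane,master   45d   v1.27.6+f67aeb3   amd64
    master-1   Ready    control-plane,master   45d   v1.27.6+f67aeb3   amd64
    master-2   Ready    control-plane,master   45d   v1.27.6+f67aeb3   amd64
    worker-0   Ready    worker                 45d   v1.27.6+f67aeb3   amd64
    worker-1   Ready    worker                 10m   v1.27.6+f67aeb3   s390x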

Monitor the output to confirm that the new compute node(s) transition to the "Ready" status, indicating successful integration. The architecture should be reported as s390x.

Verify the state of all cluster operators using `$ oc get co --watch`.

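A representative excerpt (the operator list is truncated and versions are illustrative; all operators should eventually report AVAILABLE as True, with PROGRESSING and DEGRADED as False):

    $ oc get co --watch
    NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication   4.14.2    True        False         False      12m
    dns              4.14.2    True        False         False      45d
    ingress          4.14.2    True        False         False      30m
    ...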

Continuously monitor the output to observe real-time updates, and wait until all cluster operators report "Available". This signifies that the new zCX for OpenShift compute instance has completed joining the existing Red Hat OpenShift cluster and is ready for deploying workloads.

References

  1. Migrating to a cluster with multi-architecture compute machines: https://docs.openshift.com/container-platform/4.14/updating/updating_a_cluster/migrating-to-multi-payload.html#migrating-to-multi-payload
  2. Updating of the cluster using the CLI: https://docs.openshift.com/container-platform/4.14/updating/updating_a_cluster/updating-cluster-cli.html#updating-cluster-cli
  3. Updating the cluster using the web console: https://docs.openshift.com/container-platform/4.14/updating/updating_a_cluster/updating-cluster-web-console.html#updating-cluster-web-console
  4. Example DNS configuration for user-provisioned clusters: https://docs.openshift.com/container-platform/4.14/installing/installing_bare_metal/installing-bare-metal.html#installation-dns-user-infra-example_installing-bare-metal
  5. Example load balancer configuration for user-provisioned clusters: https://docs.openshift.com/container-platform/4.14/installing/installing_bare_metal/installing-bare-metal.html#installation-load-balancing-user-infra-example_installing-bare-metal
  6. z/OS Management Facility workflows for zCX for OpenShift: https://www.ibm.com/docs/en/zcxrhos/latest?topic=zos-management-facility-workflows-zcx-openshiftd

#IBMZ #IBMz/OS #zCX #RedHatOpenShift #Multi-Arch #hybrid-cloud #Software