How to Install Red Hat OpenShift on IBM Cloud Infrastructure Center on IBM Z and IBM LinuxONE

By Dibya Raj

Abstract

This blog details how to automate Red Hat OpenShift deployments on IBM® Cloud Infrastructure Center (ICIC) using Ansible, for both z/VM and KVM. As an IaaS solution, ICIC simplifies infrastructure setup, whereas OpenShift, a PaaS solution, enables the deployment and management of containerized applications on user-provided infrastructure. This guide provides step-by-step procedures and reference materials for deploying Red Hat OpenShift Container Platform 4.17 and later versions on ICIC.

Introduction

IBM® Cloud Infrastructure Center is an advanced infrastructure management product that facilitates the on-premises deployment of both containerized and non-containerized workloads on IBM z/VM and Red Hat KVM-based virtual machines, running on IBM Z and IBM LinuxONE platforms. This blog shows how Red Hat OpenShift Container Platform runs on IBM Cloud Infrastructure Center (ICIC) and uses Ansible playbooks to automate deployment via a user-provisioned infrastructure (UPI).

Architecture Overview

End-to-End OpenShift Deployment Workflow on ICIC Using Ansible

Prerequisites: Before beginning the setup, make sure the following requirements are met. 

Pre-step 1:  Ensure that IBM Z or IBM LinuxONE is running a properly configured and functional instance of IBM® Cloud Infrastructure Center, version 1.2.3 or higher.
For more details, refer to the official IBM Cloud Infrastructure Center documentation.

Pre-step 2: A bastion server, a machine that is used to configure DNS and the load balancer for the Red Hat OpenShift installation.

  •        If you have an external or existing DNS server but no load balancer, set the os_dns_domain property and then use the separate configure-haproxy YAML to configure HAProxy on the bastion server.
  •        If you have an existing load balancer but no DNS server, use the separate configure-dns YAML to configure the DNS server on the bastion server.
  •        If you have neither an existing DNS server nor a load balancer, create a Linux server to serve as the bastion server and run the playbook to configure both the DNS server and the load balancer. You can also use the same Linux server that runs Ansible.

If you want to deploy multiple Red Hat OpenShift clusters, do not use the same bastion server to configure multiple load balancers; otherwise you may encounter x509 certificate errors.

The firewalld service should be enabled and running on the bastion server, and Python 3 must be installed there.
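On a RHEL-based bastion, a minimal way to satisfy both requirements (assuming dnf is available) is:

  [root@bastion_server ~]# systemctl enable --now firewalld
  [root@bastion_server ~]# dnf install -y python3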

Pre-step 3: You'll need to have Red Hat Subscription Manager access set up in advance.

Red Hat Access Request

  •        Request access at this link: https://ftp3.linux.ibm.com/myaccount/
  •        Select the Red Hat checkbox.
  •        The request will be sent to your manager for approval.
  •        Set a password: on the page linked above, select the Password tool (on the left side of the screen).
  •        Enter this command in a terminal: wget --user <w3id> --ask-password -O ibm-rhsm.sh ftp://ftp3.linux.ibm.com/redhat/ibm-rhsm.sh (enter your w3id email and Red Hat password when prompted)
  •        Keep the downloaded ibm-rhsm.sh file for later use.


Preparing the RHEL (Ansible Client) Environment

Step 1: Transfer the ibm-rhsm.sh Script. Begin by transferring the ibm-rhsm.sh script to your RHEL machine (the Ansible client) using scp.

From your local machine (e.g., a Mac): scp ibm-rhsm.sh root@<your_rhel_server_ip>:~

Note: Replace <your_rhel_server_ip> with the actual IP address of your RHEL server.

Step 2: Execute the Script on the RHEL Machine

  •        SSH into your RHEL server: ssh root@<your_rhel_server_ip>
  •        Make the script executable: chmod +x ibm-rhsm.sh
  •        Run it: sudo bash ~/ibm-rhsm.sh --register
  •        Enter your Red Hat account credentials (email and password) when prompted.

Step 3: Upgrade System Packages and Reboot

  •        yum upgrade -y
  •        reboot

Note: If the yum upgrade fails due to package conflicts, resolve them with: sudo dnf remove -y python3-kombu python3-amqp python3-webob python3-neutron-lib

Step 4: Set Up the Correct Python and Ansible Environment

Conflicts may arise if multiple Python versions are present and Ansible references the wrong one. To avoid this, remove older or conflicting packages:

  •        yum remove python36
  •        yum remove ansible

Install the required packages:

  •        yum install ansible-core -y
  •        yum install python3.12-devel python3.12-wheel

Verify the versions: ansible --version; python --version

You should see output like:

  ansible [core 2.16.3]
  Python 3.12.8

Step 5: Sometimes the system may point to an older Python version by default. To set Python 3.12 as the default, follow these steps:

  •        alternatives --display python
  •        sudo alternatives --set python /usr/bin/python3.12
  •        python --version  (confirm the output shows Python 3.12.x)
  •        Install pip for Python 3.12: yum install python3.12-pip
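If python3.12 is not yet registered with the alternatives system, the --set command above will fail; in that case, register it first (the priority value 10 here is arbitrary):

  sudo alternatives --install /usr/bin/python python /usr/bin/python3.12 10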

Step 6: Install Required Python Packages

Install the essential OpenStack and Ansible dependencies:

  python -m pip install openstacksdk cryptography oslo.utils netaddr \
      python-openstackclient oslo.i18n python-keystoneclient python-novaclient \
      stevedore osc-lib cliff pbr iso8601

Step 7: Install the required Ansible collections:

  •        ansible-galaxy collection install ansible.posix --force
  •        ansible-galaxy collection install openstack.cloud --force

Step 8: As part of the OpenShift configuration, you will need:

  •        An SSH key (generated on the bastion server)
  •        A pull secret (retrieved from the OpenShift portal)

Generating an SSH key on the Bastion Host:
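A typical invocation (the key type, size, path, and empty passphrase below are assumptions; adjust them to your security policy):

  [root@bastion_server ~]# ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa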

Once generated, add the public key (usually located at ~/.ssh/id_rsa.pub) to your inventory.yaml file under the appropriate section.

Step 9: Configure ICIC Credentials on the Ansible Client

To interact with IBM® Cloud Infrastructure Center from your Ansible client, ensure the environment is clean and properly set up with the required credentials and certificates. Clean up and set up the ICIC directory: begin by removing any previous ICIC configuration, if it exists, and then recreate the directory:

  •        [root@kvm4ocp2 ~]# rm -rf /opt/ibm/icic
  •        [root@kvm4ocp2 ~]# mkdir -p /opt/ibm/icic

Copy the ICIC environment and certificate files: securely copy the required files from the ICIC management node to your Ansible client:

  •        [root@kvm4ocp2 ~]# scp root@<icic_ip_address>:/opt/ibm/icic/icicrc /opt/ibm/icic/
  •        [root@kvm4ocp2 ~]# scp root@<icic_ip_address>:/etc/pki/tls/certs/icic.crt /etc/pki/tls/certs/

Source the ICIC environment file:

  •        [root@kvm4ocp2 ~]# source /opt/ibm/icic/icicrc

This ensures your shell session can interact with OpenStack CLI and modules using ICIC credentials.

Retrieve the subnet ID: to fetch the subnet ID associated with a network, run the following command from the Ansible client, and then use the value in inventory.yaml:

  •        [root@kvm4ocp2 ~]# openstack subnet list

Config 1: Start by configuring your inventory.yaml file and environment-specific settings to deploy a Red Hat OpenShift cluster on z/VM or KVM. The deployment should include a minimum of 3 master nodes and 3 compute nodes.
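As a purely illustrative sketch (these key names are hypothetical; the sample inventory shipped with the playbooks is the authoritative reference), the relevant inventory.yaml entries might look like:

  all:
    vars:
      cluster_name: <cluster_name>                       # hypothetical key: cluster identifier
      os_subnet_id: <subnet_id>                          # hypothetical key: from 'openstack subnet list'
      ssh_public_key: "<contents of ~/.ssh/id_rsa.pub>"  # public key generated in Step 8
      pull_secret: '<pull secret JSON>'                  # from the OpenShift portal
      master_count: 3                                    # minimum of 3 master nodes
      compute_count: 3                                   # minimum of 3 compute nodes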

Config 2: Ensure that the following parameters are added to any task that uses an OpenStack module:

  cloud: devstack
  ca_cert: "{{ icic_cert }}"

Make this change in each affected file, for example:

roles/configure-bootstrap-kvm/tasks/main.yaml
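A hedged sketch of what such a task might look like (the openstack.cloud.server_info module and its use here are illustrative; only the cloud and ca_cert lines are the required additions):

  # Example: any task calling an OpenStack module carries these two parameters
  - name: Gather server information from ICIC
    openstack.cloud.server_info:
      cloud: devstack
      ca_cert: "{{ icic_cert }}"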

Config 3: Ensure that the openshift_rhcos variable points to the correct RHCOS image URL for your OpenShift version (e.g., 4.18.1 on s390x). This image is required for booting the nodes during provisioning.

roles/configure-installer-rhcos/tasks/main.yaml
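For example (the URL and file name below are assumptions; verify them against the official OpenShift mirror for your release):

  openshift_rhcos: "https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.18/4.18.1/rhcos-4.18.1-s390x-openstack.s390x.qcow2.gz"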

Config 4: To allow Ansible and the OpenStack modules to interact with your ICIC environment, create a clouds.yaml file and define your OpenStack cloud configuration inside it, including:

auth_url: https://<icic_ip>:<port>/<api_version>

This file is referenced by the Ansible OpenStack modules via the cloud parameter (cloud: devstack).
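A minimal sketch of such a file (the port, domain, and API-version values are assumptions; match them to your ICIC endpoint and credentials):

  clouds:
    devstack:
      auth:
        auth_url: https://<icic_ip>:5000/v3   # assumed Identity port and version
        username: <icic_user>
        password: <icic_password>
        project_name: <project>
        user_domain_name: Default
        project_domain_name: Default
      cacert: /etc/pki/tls/certs/icic.crt     # the certificate copied in Step 9
      identity_api_version: 3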

Config 5: Update 01_preparation.yaml with Certificate and Cloud Settings



Config 6:

For KVM, enable DHCP and set the DNS nameserver on the KVM subnet, using the bastion IP as the DNS server. This is not needed for z/VM networks.

[root@lldeng-reg-standalone ~]# openstack subnet set --no-dns-nameservers <kvm_subnet_id>

[root@lldeng-reg-standalone ~]# openstack subnet set --dns-nameserver <bastion_ip> --dhcp <kvm_subnet_id>

[root@lldeng-reg-standalone ~]# openstack subnet show <kvm_subnet_id>

Once all configurations are in place, it's time to run the Ansible playbooks to begin the provisioning process.

Step 1:

[root@kvm4ocp2 ocp_upi]# ansible-playbook -i inventory.yaml 01-new_preparation.yml

After a successful run, the bootstrap ignition file and the RHCOS image should appear in the ICIC UI. This confirms that the environment has been prepared correctly and is ready for further provisioning.

Step 2:

[root@kvm4ocp2 ocp_upi]# ansible-playbook -i inventory.yaml bastion.yaml --ask-pass

(Since the bastion server runs on a separate host, Ansible will prompt for the SSH password.)

SSH password: {your bastion server password}

After a successful run, you should see output indicating that the Bastion node has been configured correctly and is ready for cluster provisioning.

Step 3:

Now run the playbook that provisions the OpenShift control-plane nodes on ICIC:

[root@kvm4ocp2 ocp_upi]# ansible-playbook -i inventory.yaml 02-new_create-cluster-control.yaml

This playbook deploys the following VMs on IBM Cloud Infrastructure Center: master0, master1, and master2. Once the masters are created, the bootstrap node begins its boot-up process.

Step 4: Once the control plane is up and the bootstrap process is complete, run the following playbook to deploy the compute (worker) nodes:

  [root@kvm4ocp2 ocp_upi]# ansible-playbook -i inventory.yaml 03-new_create-cluster-compute.yaml

This playbook provisions all the compute nodes and completes the OpenShift cluster setup.

After a successful run, you will be provided with the OpenShift web console URL and the kubeadmin login credentials.

To access your OpenShift cluster from your workstation (e.g., a MacBook), add entries to /etc/hosts that map the bastion server's IP address to the cluster's console hostnames:

abc@abcs-Macbook-Pro ~ % vi /etc/hosts

<bastion_ip>    console-openshift-console.apps.<cluster_name>.ocp.com

<bastion_ip>    oauth-openshift.apps.<cluster_name>.ocp.com

Verifying Node Status via the oc CLI: You can use the oc command-line tool to verify the status of both control-plane and compute nodes. The nodes should appear in the Ready state once the OpenShift cluster is up and running successfully.

  [root@bastion_server ~]# ./oc get nodes
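Illustrative output (node names, ages, and versions will differ in your environment):

  NAME      STATUS   ROLES                  AGE   VERSION
  master0   Ready    control-plane,master   45m   v1.31.6
  master1   Ready    control-plane,master   45m   v1.31.6
  master2   Ready    control-plane,master   45m   v1.31.6
  worker0   Ready    worker                 20m   v1.31.6
  worker1   Ready    worker                 20m   v1.31.6
  worker2   Ready    worker                 20m   v1.31.6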

Network Debugging Insights (Real-world Issues and Fixes)

During our OpenShift deployment on IBM Cloud Infrastructure Center (ICIC), one of the most recurring pain points was network misconfiguration, especially related to the Bastion node. While the Ansible playbooks generally handle this setup well, there were instances where the playbooks failed silently, leading to issues that were hard to diagnose.

To help others avoid similar roadblocks, here’s a detailed account of what went wrong and how we resolved it.

Services to Monitor on the Bastion Server

The Bastion node acts as both the DNS and load balancer for the OpenShift cluster, so its setup must be correct. The following services need to be running:

  •        named (DNS service)
  •        haproxy (Load balancer)
  •        firewalld (Firewall management)
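A quick way to confirm all three in one command:

  [root@bastion_server ~]# systemctl is-active named haproxy firewalld

Each line of output should read "active"; anything else points to the service that needs attention.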

Once your VMs are successfully deployed on ICIC, test name resolution from the bastion:

  [root@bastion_server ~]# nslookup bootstrap

  Server:  <bastion_ip>
  Address: <bastion_ip>#<port>

  Name:    bootstrap.<cluster_name>.ocp.com
  Address: <bootstrap_ip>

You should get the correct VM IP addresses and cluster DNS names. If the IPs returned don't match what's provisioned in ICIC, it’s likely a sign that the environment wasn’t cleaned up properly before provisioning.

Clean-Up and Recovery (If VM IPs Don’t Match)

If inconsistencies appear, you should:

1.        Run the destroy playbook to reset the environment:

  ansible-playbook -i inventory.yaml 04-destroy.yaml --ask-pass

This will:

  •        Remove all VMs from ICIC
  •        Clean up the DNS and HAProxy configuration
  •        Delete the related inventory JSON files

2.        Remove any stale DNS and HAProxy artifacts manually: rm -f dvkvmicic-*.json

Competing DNS Services (When the Bastion is on a KVM Host)

It is generally not recommended to use the KVM host as the bastion server. When you do, a common issue is conflicting DNS services: the host may already be running its own DNS (e.g., dnsmasq), which clashes with the named service configured for OpenShift.

To resolve this:

1.        Stop the conflicting services:

  •        systemctl stop dnsmasq
  •        systemctl disable dnsmasq
  •        systemctl stop haproxy; systemctl stop named

2.        Reboot the bastion server.

3.        After the reboot, ensure that only the correct DNS and HAProxy services are running:

  •        systemctl start named
  •        systemctl start haproxy
  •        systemctl status dnsmasq   # Should be inactive

4.        Confirm that DNS looks clean and is correctly configured:

  [root@bastion_server ~]# netstat -tulpn

  Active Internet connections (only servers)

  •        DNS zone file path: /var/named/<domain.name.file>
  •        HAProxy config path: /etc/haproxy/haproxy.cfg
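For orientation, a minimal sketch of the HAProxy API frontend a UPI cluster typically needs (backend and server names are illustrative; the standard OpenShift API port 6443 is assumed):

  frontend api
      bind *:6443
      mode tcp
      default_backend api-backend

  backend api-backend
      mode tcp
      balance roundrobin
      server master0 <master0_ip>:6443 check
      server master1 <master1_ip>:6443 check
      server master2 <master2_ip>:6443 check

Similar frontends are typically defined for the machine-config server (port 22623) and for ingress (ports 80 and 443).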

Note: These issues may not occur every time, but when they do, they are difficult to track down without proper visibility. Including this step in your debugging checklist can save hours of troubleshooting.

Conclusion: This blog explains how to install Red Hat OpenShift on IBM Cloud Infrastructure Center using a user-provisioned infrastructure (UPI) method. It provides a clear, automated approach for deploying OpenShift on IBM Z and IBM LinuxONE platforms, helping streamline operations and improve efficiency.
