
Installing Red Hat OpenShift Container Platform 4.15 cluster with LPAR as a node on IBM Z and IBM LinuxONE

By Gerald Hosch posted Fri July 26, 2024 04:50 AM

  

Authors: K SHIVA SAI (shiva.sai.k1@ibm.com), Sanidhya (sanidhya.sanidhya@ibm.com) 

With the availability of Red Hat® OpenShift® 4.15, users can deploy Red Hat OpenShift nodes directly onto LPARs on IBM Z® and IBM® LinuxONE, in addition to the existing options of using the hypervisors IBM z/VM® or Red Hat Enterprise Linux KVM. The LPAR option provides efficiency, helping to optimize the performance of Red Hat OpenShift on the IBM Z and LinuxONE platforms.

Booting LPARs with Red Hat Enterprise Linux CoreOS and seamlessly attaching them to the Red Hat OpenShift cluster streamlines operations, improves the potential of the IBM Z and LinuxONE infrastructure within the Red Hat OpenShift landscape, and offers support for devices such as DASD and FCP, as well as for OSA, RoCE, and HiperSockets devices.

Hardware and software requirements for a ‘standard’ Red Hat OpenShift cluster

The minimum requirements for a ‘standard’ Red Hat OpenShift cluster are:

  • A system to host the installation images, manage load balancing, and handle DNS configuration.
  • One node or machine used temporarily for the bootstrap.
  • Three control plane LPAR nodes to run the Red Hat OpenShift services that form the control plane.
  • At least two compute LPAR nodes to run the user workloads.

Hardware and software requirements for a 3-node Red Hat OpenShift cluster

The minimum requirements for a 3-node Red Hat OpenShift cluster are:

  • A system to host the installation images, manage load balancing, and handle DNS configuration.
  • One node or machine used temporarily for the bootstrap.
  • Three LPAR nodes that function as both compute and control plane nodes to run the Red Hat OpenShift services.

Hardware and software requirements for a ‘single-node OpenShift’ cluster

The minimum requirements for a single-node OpenShift cluster are:

  • A system to host the installation images, manage load balancing, and handle DNS configuration.
  • One node or machine used temporarily for the bootstrap.
  • One LPAR node that functions as both a compute and control plane node to run the Red Hat OpenShift services.

Steps to bring up the cluster with LPARs as Red Hat OpenShift nodes

Setting up the bastion

This pivotal node initiates interactions with the cluster, so careful configuration of its networking and load balancing is required. These setup procedures are vital for ensuring seamless communication and efficient distribution of workload requests, establishing a robust foundation for cluster management and operation.

DNS configuration

DNS configuration is crucial for enabling the bastion node to accurately resolve the IP addresses of target nodes based on incoming traffic. Likewise, it facilitates reverse resolution, allowing the bastion node to pinpoint the source of incoming requests. This bidirectional resolution process forms the backbone of efficient communication within the cluster, helping to ensure seamless navigation and optimal performance.

Example for forward DNS Configuration:

Consider cluster_name: ocp and base_domain: example.com.
Save the file as <cluster_name>.<base_domain>.zone (here, the filename would be ocp.example.com.zone).

$TTL 900

@          IN SOA bastion.ocp.example.com. hostmaster.ocp.example.com. (
             2019062002 1D 1H 1W 3H
           )

           IN NS bastion.ocp.example.com.

bastion    IN A <bastion_ip>
api        IN A <bastion_ip>
api-int    IN A <bastion_ip>
apps       IN A <bastion_ip>
*.apps     IN A <bastion_ip>
bootstrap  IN A <bootstrap_ip>
master0    IN A <master0_ip>
master1    IN A <master1_ip>
master2    IN A <master2_ip>
worker0    IN A <worker0_ip>
worker1    IN A <worker1_ip>

Example for reverse DNS configuration:

Assume the IP addresses of all machines reside within the subnet 192.168.12.0/24.

Save the file as 12.168.192.in-addr.arpa.zone.

$TTL 900

@          IN SOA bastion.ocp.example.com. hostmaster.ocp.example.com. (
             2019062001 1D 1H 1W 3H
           )

           IN NS bastion.ocp.example.com.

<Last octet in bootstrap IP>    IN PTR bootstrap.ocp.example.com.
<Last octet in master0 IP>      IN PTR master0.ocp.example.com.
<Last octet in master1 IP>      IN PTR master1.ocp.example.com.
<Last octet in master2 IP>      IN PTR master2.ocp.example.com.
<Last octet in worker0 IP>      IN PTR worker0.ocp.example.com.
<Last octet in worker1 IP>      IN PTR worker1.ocp.example.com.
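
These zone files must also be declared in the name server configuration on the bastion. The following is a minimal sketch, assuming BIND (named) is used as the DNS server and the zone files are stored in /var/named/ (the directory and the allow-query policy are assumptions; adapt them to your environment):

zone "ocp.example.com" IN {
    type master;
    file "ocp.example.com.zone";
    allow-query { any; };
};

zone "12.168.192.in-addr.arpa" IN {
    type master;
    file "12.168.192.in-addr.arpa.zone";
    allow-query { any; };
};

The zone files can be validated with named-checkzone (for example, named-checkzone ocp.example.com /var/named/ocp.example.com.zone) before restarting the name server with systemctl restart named.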

Configuring the Load Balancer

HAProxy is a commonly used load balancer in Red Hat OpenShift clusters, offering robust capabilities for distributing incoming traffic across multiple nodes. It ensures high availability and scalability by intelligently routing requests to the appropriate backend servers. HAProxy optimizes resource utilization, enhances performance, and maintains seamless operation, thus facilitating efficient and reliable service delivery.

See: Sample HAProxy configuration for IBM Z.
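
The linked sample is the reference configuration. As an illustration only, a minimal sketch of the idea is shown below, assuming HAProxy runs on the bastion and uses the host names from the DNS example above (global and defaults sections with suitable timeouts are omitted). API (6443), machine config server (22623), and ingress (443/80) traffic is forwarded as raw TCP:

listen api-server-6443
    bind *:6443
    mode tcp
    server bootstrap bootstrap.ocp.example.com:6443 check
    server master0 master0.ocp.example.com:6443 check
    server master1 master1.ocp.example.com:6443 check
    server master2 master2.ocp.example.com:6443 check

listen machine-config-server-22623
    bind *:22623
    mode tcp
    server bootstrap bootstrap.ocp.example.com:22623 check
    server master0 master0.ocp.example.com:22623 check
    server master1 master1.ocp.example.com:22623 check
    server master2 master2.ocp.example.com:22623 check

listen ingress-router-443
    bind *:443
    mode tcp
    balance source
    server worker0 worker0.ocp.example.com:443 check
    server worker1 worker1.ocp.example.com:443 check

listen ingress-router-80
    bind *:80
    mode tcp
    balance source
    server worker0 worker0.ocp.example.com:80 check
    server worker1 worker1.ocp.example.com:80 check

After the bootstrap process completes, the bootstrap entries can be removed from the 6443 and 22623 listeners.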

Generating the Ignition files

Download the Red Hat OpenShift client (oc) and Red Hat OpenShift installer binaries from the client mirror, then extract the tar files.

Create the install config as mentioned here: Sample Install config for IBM Z.
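
The linked sample is authoritative. As an illustration, a minimal install-config.yaml for the example used in this post (cluster_name ocp, base_domain example.com, three control plane nodes, user-provisioned compute nodes) might look like the sketch below; the pull secret and SSH key are placeholders:

apiVersion: v1
baseDomain: example.com
compute:
- architecture: s390x
  hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  architecture: s390x
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '<pull_secret>'
sshKey: '<ssh_public_key>'

With user-provisioned infrastructure the compute replicas are set to 0; the compute LPARs booted later join the cluster through the worker Ignition file.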

To generate the Kubernetes manifests and Ignition files, use the following commands:

./openshift-install create manifests --dir <installation_directory>

./openshift-install create ignition-configs --dir <installation_directory>

Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the “./<installation_directory>/auth” directory:

.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign

Transfer the ignition files to an HTTP server, or alternatively, configure the bastion itself as an HTTP server.

Configuring HTTP server in bastion

Enable the HTTP server on the bastion node by installing the httpd package: yum install -y httpd

Create a directory named "ignition" within /var/www/html. Then copy the Ignition files generated by the openshift-install commands into the /var/www/html/ignition/ directory.

In addition to the ignition files, include the rootfs.img obtained from the OpenShift mirror repository in the /var/www/html directory.

The files stored in the /var/www/html directory can be accessed using the external IP address of the machine, followed by the HTTP port, typically 80 or 8080.
To access the master.ign file located in the /var/www/html/ignition directory on the bastion machine with the IP address 192.23.23.1 and HTTP port 80, simply use the following
URL: http://192.23.23.1:80/ignition/master.ign
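
The steps above can be summarized in a short sketch, assuming the default Apache document root and that the Ignition files and rootfs image are already present on the bastion (file names are taken from the examples in this post):

yum install -y httpd
mkdir -p /var/www/html/ignition
cp <installation_directory>/*.ign /var/www/html/ignition/
cp rhcos-4.15.0-rc.0-s390x-live-rootfs.s390x.img /var/www/html/
chmod -R a+r /var/www/html/ignition /var/www/html/*.img
restorecon -vR /var/www/html
systemctl enable --now httpd

# verify that the files are reachable from the cluster network
curl -I http://192.23.23.1:80/ignition/master.ign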

Booting the nodes

Node booting is typically managed via the Hardware Management Console (HMC), while the necessary boot files are provided through an FTP server.

Critical files for booting include the Ignition file, the rootfs image (accessible via an HTTP server), kernel.img, initrd.img, genericdvd.prm, and initrd.addrsize. These files can be obtained by mounting the ISO image downloaded from the OpenShift mirror repository.
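
For example, the boot files can be extracted with the following sketch, assuming the s390x live ISO has been downloaded to the bastion and vsftpd serves files from /var/ftp/pub (the ISO file name, the FTP document root, and the exact paths inside the ISO are assumptions; check your release):

mount -o loop rhcos-4.15.0-rc.0-s390x-live.s390x.iso /mnt
mkdir -p /var/ftp/pub/images
cp /mnt/generic.ins /var/ftp/pub/
cp /mnt/images/kernel.img /mnt/images/initrd.img /mnt/images/genericdvd.prm /mnt/images/initrd.addrsize /var/ftp/pub/images/
umount /mnt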

Example of generic.ins file

* minimal lpar ins file
images/kernel.img 0x00000000
images/initrd.img 0x02000000
images/genericdvd.prm 0x00010480
images/initrd.addrsize 0x00010408

images/
├── genericdvd.prm
├── initrd.addrsize
├── initrd.img
└── kernel.img

Customize the genericdvd.prm file to align with specific preferences or requirements. Here's a basic example of how the genericdvd.prm file could be configured for a control plane node (the Ignition URL points to master.ign; for a compute node, point it at worker.ign instead).
Organize all of the above files exactly as specified in the generic.ins file on the FTP server.

rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=sda coreos.live.rootfs_url=http://bastion:8080/rhcos-4.15.0-rc.0-s390x-live-rootfs.s390x.img coreos.inst.ignition_url=http://bastion:8080/ignition/master.ign ip=<LPAR_IP>::<Gateway>:<Subnet_Mask>:<hostname_for_node>:<network_interface>:none nameserver=<bastion_ip> cio_ignore=all,!condev zfcp.allow_lun_scan=0 rd.znet=<network_card_details> rd.zfcp=<FCP_device>

See detailed information regarding the parm file here: Parm file detailed info.

Booting nodes from HMC

  • Ensure all essential files are properly organized on both the HTTP server and FTP server for efficient access and management.

  • Obtain the necessary access permissions for the HMC and select the LPAR to boot.

  • Choose the LOAD from ‘Removable Media’ option for the LPAR.

  • Specify the FTP server as the data source and provide the corresponding credentials, selecting the FTP protocol.

  • After entering the credentials, the HMC prompts you to select the generic.ins file. Choose the correct generic.ins file in the specified path and click OK.

  • Enter the HMC password and click OK to start the boot procedure.

  • Choose the "Operating System Messages" option to view the boot logs.

  • For more detailed information regarding the boot procedure, refer to this resource HMC boot process.

The initiation of the Red Hat OpenShift bootstrap process occurs once the cluster nodes have successfully booted into the persistent Red Hat Enterprise Linux CoreOS environment stored on disk. This process relies on the configuration information provided by the Ignition config files to set up and deploy Red Hat OpenShift on the respective machines. It's essential to patiently await the completion of the bootstrap process.

Run the following command on the bastion to monitor the bootstrap process:
./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level=info

Accessing the cluster

To access the cluster, there are two options (see the sketch below):

  • Set the KUBECONFIG environment variable:
    • export KUBECONFIG=<installation_directory>/auth/kubeconfig
  • Copy the kubeconfig file to the /root/.kube/ directory.
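
A sketch of both options, assuming the installation was run as the root user on the bastion:

# Option 1: point oc at the generated kubeconfig for the current shell
export KUBECONFIG=<installation_directory>/auth/kubeconfig

# Option 2: make it the default kubeconfig for the root user
mkdir -p /root/.kube
cp <installation_directory>/auth/kubeconfig /root/.kube/config

# either way, oc should now authenticate as system:admin
oc whoami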

As nodes are added to the cluster, they generate Certificate Signing Requests (CSRs). To review the status of the CSRs, run the following command:
oc get csr -A
Ensure all CSRs are approved. If any remain pending, approve them with:
oc get csr -o name | xargs oc adm certificate approve

Verify the status of the nodes by executing the command oc get no and ensure that all nodes are in the "Ready" state.
Verify the status of the cluster operators by executing the command oc get clusteroperators and ensure that all cluster operators are available.
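
For convenience, the verification commands mentioned above, together with the installer's own completion check, can be run from the bastion (the installation directory is the same one used earlier):

oc get nodes
oc get clusteroperators
./openshift-install --dir <installation_directory> wait-for install-complete --log-level=info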

For debugging any installation failures with the Red Hat OpenShift cluster, please refer to the following information: comprehensive troubleshooting guidance.

Performance of LPAR Cluster

Workloads deployed on Red Hat OpenShift on LPAR can benefit from higher efficiency and better performance. This comes from a more lightweight architecture without a hypervisor layer. Overall, processes, I/O operations, and networking can be handled more efficiently, which can result in higher throughput, lower response times, and lower management overhead. In addition, clusters on LPAR integrate seamlessly with IBM Z and LinuxONE management tools and security features, providing a more efficient and secure infrastructure for modern computing.

Conclusion

In summary, deploying Red Hat OpenShift on LPARs as nodes offers a powerful solution for containerized environments. This approach not only benefits from the efficiency and scalability of LPARs but also ensures seamless integration with the IBM Z and LinuxONE infrastructure. With proper setup and configuration, Red Hat OpenShift installation on LPARs provides a reliable foundation for modern application deployment and management. 

