
Installing Red Hat OpenShift Container Platform with User-Provisioned Infrastructure via IBM Cloud Infrastructure Center

By Gerald Hosch posted Wed June 02, 2021 07:53 AM

  

Abstract

IBM® Cloud Infrastructure Center is an advanced infrastructure management offering, including on-premises cloud deployments of IBM z/VM® based Linux® virtual machines on the IBM Z® and IBM LinuxONE platforms. This article introduces how to use Cloud Infrastructure Center to install Red Hat® OpenShift® Container Platform with user-provisioned infrastructure (UPI).
Written by Chen Ji.

Objective

Cloud Infrastructure Center 1.1.3 and 1.1.4 support the provisioning of Red Hat Enterprise Linux CoreOS and users can leverage this function to install Red Hat OpenShift 4.6, 4.7, 4.8, and 4.9. The installation with UPI is described in the official Red Hat OpenShift installation documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/installing_on_ibm_z_and_linuxone/installing-on-ibm-z.

This blog describes an alternative method in which the virtual machines are created through the Cloud Infrastructure Center user interface (UI). Cloud Infrastructure Center users with an admin, project manager, or self-service role are able to install a cluster. The method described here needs neither FTP nor HTTP servers, and Cloud Infrastructure Center helps to manage and monitor the virtual machines.

Architecture Overview

Picture 1

Picture 1 shows an overview of the architecture that Cloud Infrastructure Center uses to create Red Hat OpenShift (OCP) clusters. The compute, image, network, identity, and block storage services are used during the cluster creation steps.

Why Use Cloud Infrastructure Center for UPI

Automated Red Hat CoreOS provisioning:

  • Cloud Infrastructure Center provides a flexible UI and requires minimal z/VM skills to provision z/VM virtual machines.
  • Cloud Infrastructure Center provides the industry-standard OpenStack-compatible API, which can be consumed by tools such as Red Hat Ansible®, Red Hat CloudForms®, Terraform, or VMware. This makes it possible to write Ansible scripts that orchestrate the whole Red Hat OpenShift cluster creation.
  • Cloud Infrastructure Center supports affinity and anti-affinity rules to automatically dispatch the CoreOS virtual machines onto different compute nodes.

Flexible and easier life-cycle management of Red Hat OpenShift nodes:

  • Cloud Infrastructure Center has multi-tenancy support: you can create multiple Red Hat OpenShift clusters through one Cloud Infrastructure Center instance.
  • Technically, autoscaling is feasible (based on the Cloud Infrastructure Center monitoring-related services).

No DHCP or FTP services are required for simple cluster creation.

Requirements and planning

We highly recommend reading these documents before planning and installation:

Environment configuration

In this document, we are using the IPs and hostnames shown in Table 1:

Domain name    example.com
Cluster name   openshift

Node DNS name                     IP              Ignition file
bootstrap.openshift.example.com   172.26.104.30   bootstrap.ign
master-0.openshift.example.com    172.26.104.31   master.ign
master-1.openshift.example.com    172.26.104.32   master.ign
master-2.openshift.example.com    172.26.104.33   master.ign
worker-0.openshift.example.com    172.26.104.34   worker.ign
worker-1.openshift.example.com    172.26.104.35   worker.ign

Table 1


Code 1 shows the BIND zone file that implements the DNS records for this environment:

$ cat /var/named/openshift.example.com.zone
$TTL 900

@                     IN SOA bastion.openshift.example.com. hostmaster.openshift.example.com. (
                        2019062002 1D 1H 1W 3H
                      )
                      IN NS bastion.openshift.example.com.

bastion               IN A 172.26.104.63
api                   IN A 172.26.104.63
api-int               IN A 172.26.104.63
apps                  IN A 172.26.104.63
*.apps                IN A 172.26.104.63

bootstrap             IN A 172.26.104.30

master-0              IN A 172.26.104.31
master-1              IN A 172.26.104.32
master-2              IN A 172.26.104.33

worker-0              IN A 172.26.104.34
worker-1              IN A 172.26.104.35
worker-2              IN A 172.26.104.36

etcd-0                IN A 172.26.104.31
etcd-1                IN A 172.26.104.32
etcd-2                IN A 172.26.104.33

_etcd-server-ssl._tcp IN SRV 0 10 2380 etcd-0.openshift.example.com.
                      IN SRV 0 10 2380 etcd-1.openshift.example.com.
                      IN SRV 0 10 2380 etcd-2.openshift.example.com.

Code 1
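Before continuing, it is worth verifying that the zone resolves as expected. A quick check with `dig`, assuming the name server is reachable at 172.26.104.63, could look like this:

$ dig +short api.openshift.example.com @172.26.104.63
172.26.104.63
$ dig +short -t srv _etcd-server-ssl._tcp.openshift.example.com @172.26.104.63
0 10 2380 etcd-0.openshift.example.com.
0 10 2380 etcd-1.openshift.example.com.
0 10 2380 etcd-2.openshift.example.com.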

Configuration file requirement

The Red Hat OpenShift installer provides Ignition configs that are used to configure the Red Hat Enterprise Linux CoreOS-based bootstrap and control plane machines, by using bootstrap.ign and master.ign respectively. The Red Hat OpenShift installer also provides worker.ign, which can be used to configure the Red Hat Enterprise Linux CoreOS-based worker machines. The Ignition config files are generated based on the install-config file. In our example, we download the openshift-installer 4.6.8 (we use 4.6.8 as the example here; for 4.7 you need to download the corresponding version), unzip it, and use it.

Customize install-config.yaml

The OpenShift installer uses an install-config.yaml file to drive all installation-time configuration.

Note: the baseDomain and metadata.name values must align with the DNS server. They must be identical to the cluster name and domain name configured in the DNS and load balancer services. In our example, the baseDomain is `example.com` and the cluster name is `openshift`, so all bootstrap, master, and worker nodes must have `openshift.example.com` as suffix.

Code 2 provides an example of the configuration file:

$ cat install-config.yaml
apiVersion: v1
baseDomain: example.com
compute:
- architecture: s390x
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: s390x
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: openshift
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
publish: External
pullSecret: xxx
sshKey: xxx

Code 2

If you do not have direct internet access, you can check cluster-wide proxy and set it in install-config.yaml, as sketched below.
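A minimal sketch of such a proxy section in install-config.yaml, with placeholder endpoints, could look like this:

proxy:
  httpProxy: http://<username>:<password>@<proxy-host>:<port>
  httpsProxy: https://<username>:<password>@<proxy-host>:<port>
  noProxy: .example.com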

Generate Ignition Config

Refer to the ignition creation steps in understanding how to create ignition files. Code 3 is an example of creating the ignition config files.

$ mkdir test-target
$ cp install-config.yaml test-target/
$ ./openshift-install create ignition-configs --dir test-target
INFO Consuming Install Config from target directory 
INFO Ignition-Configs created in: test-target and test-target/auth 

Code 3

Note: if you see this error message: "Unable to connect to the server: x509: certificate expired or is not yet valid: current time xxx is after yyy", your ignition files are older than 24 hours and have become invalid; regenerate them.


Cloud Infrastructure Center preparation

Download the Red Hat Enterprise Linux CoreOS image from the Red Hat OpenShift release site, then upload it to Cloud Infrastructure Center

In our example, download the Red Hat Enterprise Linux CoreOS image listed in Table 2 from the Red Hat OpenShift official website to your local disk and decompress it. Then, upload the image to Cloud Infrastructure Center. You need to set the correct operating system (Red Hat Enterprise Linux CoreOS 4), disk type (DASD or SCSI), and the corresponding disk format listed in Table 2.

Images to be downloaded:

z/VM (SCSI), disk format RAW:
  • Red Hat OpenShift 4.6: rhcos-4.6.8-s390x-metal.s390x.raw.gz
  • Red Hat OpenShift 4.7: rhcos-4.7.7-s390x-metal.s390x.raw.gz
  • Red Hat OpenShift 4.8: rhcos-4.8.2-s390x-metal.s390x.raw.gz
  • Red Hat OpenShift 4.9: rhcos-4.9.0-s390x-metal.s390x.raw.gz

z/VM (DASD), disk format RAW:
  • Red Hat OpenShift 4.6: rhcos-4.6.8-s390x-dasd.s390x.raw.gz
  • Red Hat OpenShift 4.7: rhcos-4.7.7-s390x-dasd.s390x.raw.gz
  • Red Hat OpenShift 4.8: rhcos-dasd.s390x.raw.gz
  • Red Hat OpenShift 4.9: rhcos-dasd.s390x.raw.gz

KVM (Qcow2), disk format QCOW2:
  • Red Hat OpenShift 4.6: N/A (not supported)
  • Red Hat OpenShift 4.7: rhcos-4.7.7-s390x-openstack.s390x.qcow2.gz
  • Red Hat OpenShift 4.8: rhcos-openstack.s390x.qcow2.gz
  • Red Hat OpenShift 4.9: rhcos-4.9.0-s390x-openstack.s390x.qcow2.gz

Table 2
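As a sketch, downloading and decompressing the 4.6.8 DASD image used in our example could look like this (check the actual download URL against the Red Hat OpenShift release page):

$ curl -LO https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.6/4.6.8/rhcos-4.6.8-s390x-dasd.s390x.raw.gz
$ gunzip rhcos-4.6.8-s390x-dasd.s390x.raw.gz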

Note: While we are using DASD for the deployment in our example, Picture 2 shows an example for a SCSI image.

Refer to upload images for how to upload an image to Cloud Infrastructure Center.


Picture 2

After you successfully create the Red Hat Enterprise Linux CoreOS image, it looks as shown in Picture 3:


Picture 3

Prepare and configure the network

The virtual machines for the cluster are created with static IP addresses. You must configure the network connectivity between the virtual machines to enable the communication of the cluster components. A DNS server for the network is required for UPI, and it has to be set up according to the DNS requirements of Red Hat OpenShift.

Make sure you create or edit the network configuration with DNS settings. In Picture 4, for example, `172.26.104.63` is configured as the DNS server of the network `flat01`. Each virtual machine provisioned with network `flat01` takes 172.26.104.63 as its DNS server. For more information about network settings, refer to work with networks.
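If you prefer the OpenStack-compatible API over the UI for this step, a hypothetical equivalent sets the DNS name server on the subnet of `flat01` (the subnet name below is illustrative):

$ openstack subnet set --dns-nameserver 172.26.104.63 flat01-subnet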

IMPORTANT: For KVM, you must enable the DHCP settings; otherwise, the created Red Hat Enterprise Linux CoreOS virtual machines encounter network connection issues.


Picture 4 

Upload the bootstrap ignition and create bootstrap ignition shim

The generated bootstrap ignition file tends to be large (around 300 KB; it contains all the manifests, the master and worker ignitions, and so on). It is generally too large to be passed to the virtual machine directly: the user data limit of a Cloud Infrastructure Center created virtual machine is 64 KB, so it is not possible to pass the ignition file directly.

There are a couple of ways to pass the ignition file. You can use an external HTTP server and let the virtual machine access it, or you can leverage the Cloud Infrastructure Center image service to obtain the bootstrap ignition through a bootstrap ignition shim. In our example, we create a smaller ignition shim file that is passed as the user data of the Cloud Infrastructure Center created virtual machine; the shim then downloads the main ignition file automatically during the boot process of the deployed Red Hat Enterprise Linux CoreOS.

IMPORTANT: The bootstrap.ign contains sensitive information such as your clouds.yaml credentials. It must not be accessible to the public! It is only used once, during the boot of the bootstrap server. We strongly recommend that you restrict access to that server and delete the file afterwards.

In this blog, we upload the bootstrap ignition file to Cloud Infrastructure Center image services.

Upload the bootstrap.ign file to Cloud Infrastructure Center image services:

Create the <image_name> image and upload the bootstrap.ign file:

$ source /opt/ibm/icic/icicrc root
Please input the password of root:
$ openstack image create --disk-format=raw --container-format=bare --file test-target/bootstrap.ign bootstrap.ign

Code 4

From the Cloud Infrastructure Center UI, you can see that the image is uploaded successfully, with its ID marked in the red rectangle in Picture 5:


Picture 5 

Note: Make sure the created image has the active status, as shown in Picture 5.

Get bootstrap shim image file URL 

We need to get the location of the image that we uploaded. Use the following command to obtain the URL and save it.
Code 5 is an example of how to get the image file URL:

$ openstack image show bootstrap.ign -c file
+-------+------------------------------------------------------+
| Field | Value                                                |
+-------+------------------------------------------------------+
| file  | /v2/images/db4774c8-2d16-4320-82d7-52acd5ab9292/file |
+-------+------------------------------------------------------+

Code 5

Get image service URL:

We need to get the Cloud Infrastructure Center image service URL so that the Red Hat Enterprise Linux CoreOS virtual machine ignition process knows where to retrieve the bootstrap.ign ignition file. There are three endpoints (admin, internal, public); in Cloud Infrastructure Center they are identical, so select any of them.
Code 6 is an example of how to obtain the image service URL:

$ openstack catalog show image
+-----------+---------------------------------------------+
| Field     | Value                                       |
+-----------+---------------------------------------------+
| endpoints | RegionOne                                   |
|           |   admin: https://m5404-integ-contro:9292    |
|           | RegionOne                                   |
|           |   internal: https://m5404-integ-contro:9292 |
|           | RegionOne                                   |
|           |   public: https://m5404-integ-contro:9292   |
|           |                                             |
| id        | 2fef6ff5f6954904b9eadfef8c9001c3            |
| name      | glance                                      |
| type      | image                                       |
+-----------+---------------------------------------------+

Code 6

Obtain the token to access the ignition file

By default, the Cloud Infrastructure Center image service doesn't allow anonymous access to the images (data). So, a valid auth token is needed in the ignition file. The token can be obtained with the command shown in Code 7:

$ openstack token issue -c id -f value
gAAAAABgD9WPjqwaChzfrxw7izzaSzqHG52n-FVNA5j8y6dVxYdNLj1rp6Va6q9XMhNPmJUwFNWn1ogEbOIRCX-Ffyek2x8w1t3JURj8GThKafeUteUACdaYFNxV_cS-O5TwMayP5mOUE0-B4_OXotlQLIqWjbsE1CuFE0TIy1kKDU0lbg-FQ20t3RfS6wARRc18-t68FVN0D797F9es8V_TAOmEyQapjXUbN2dWwdUMkd2D3dWMBj4

Code 7 

Note: the token can be generated by any OpenStack user with image service read access, and this particular token is only used for downloading the ignition file.

The token is combined with the header name `X-Auth-Token` and put into the ignition file:

"httpHeaders": [
        {
                "name": "X-Auth-Token",
                "value": "<token that we obtained in the command>"
        }
]

Obtain CA certificate contents

The CA certificate contents are necessary because they are needed by the ignition process to communicate with Cloud Infrastructure Center through HTTPS. The certificate is in the file `/etc/pki/tls/certs/icic.crt` on the management node, and its base64-encoded contents are obtained as shown in Code 8.

$ env | grep OS_CACERT
OS_CACERT=/etc/pki/tls/certs/icic.crt
$ openssl x509 -in $OS_CACERT | base64 -w0
<base64 output>

Code 8

Create bootstrap shim file

Code 9 is a sample bootstrap shim file:

{"ignition":{"config":{"merge":[{"source":"https://m5404-integ-contro:9292/v2/images/746ec20c-7d86-46b5-b661-80009cf0eff8/file","httpHeaders":[{"name":"X-AUTH-Token","value":"xxxxxxxxxxxx"}]}]},"security":{"tls":{"certificateAuthorities":[{"source":"data:text/plain;charset=utf-8;base64,<base64 toke> "}]}},"version":"3.1.0"}}

Code 9

The sample follows this format:

{"ignition":{"config":{"merge":[{"source":"<Image endpoint URL><bootstrap shim file URL>","httpHeaders":[{"name":"X-AUTH-Token","value":"<auth token>"}]}]},"security":{"tls":{"certificateAuthorities":[{"source":"data:text/plain;charset=utf-8;base64,<CA cert base64 contents>"}]}},"version":"3.1.0"}}

Where:

  • <Image endpoint URL>: the URL obtained in step `Get image service URL`
  • <bootstrap shim file URL>: the URL obtained in step `Get bootstrap shim image file URL`
  • <auth token>: the token obtained in step `Obtain the token to access the ignition file`
  • <CA cert base64 contents>: the contents obtained in step `Obtain CA certificate contents`
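Putting these pieces together, the shim can be generated with a small shell sketch like the following; it only combines the values gathered in the steps above (the heredoc is our illustration, and the resulting file must not contain extra blanks):

$ GLANCE_URL=https://m5404-integ-contro:9292
$ IGN_PATH=/v2/images/db4774c8-2d16-4320-82d7-52acd5ab9292/file
$ TOKEN=$(openstack token issue -c id -f value)
$ CACERT=$(openssl x509 -in $OS_CACERT | base64 -w0)
$ cat > bootstrap-shim.ign <<EOF
{"ignition":{"config":{"merge":[{"source":"${GLANCE_URL}${IGN_PATH}","httpHeaders":[{"name":"X-Auth-Token","value":"${TOKEN}"}]}]},"security":{"tls":{"certificateAuthorities":[{"source":"data:text/plain;charset=utf-8;base64,${CACERT}"}]}},"version":"3.1.0"}}
EOF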

Tips:

  • Get the console output from the UI to see the Red Hat Enterprise Linux CoreOS boot logs and take actions as needed.
  • In case you see `get ignition file: 'utf-8' codec can't decode byte 0xa0 in position 2: invalid start byte`, the bootstrap shim might contain blanks, which you need to delete; the ignition shim file must not contain blanks.


Use Cloud Infrastructure Center to install the cluster

In this step, we use the Red Hat Enterprise Linux CoreOS image we uploaded to Cloud Infrastructure Center to deploy the bootstrap, master, and worker nodes.

  • Refer to Picture 6 when creating the virtual machines.
  • Follow Table 1 to input the IP address for each of the nodes. The network is set to a static IP.
  • Follow Table 1 to input the ignition file for each of the nodes.
  • The flavor (compute template) must meet the minimum requirements for the machines of the cluster. By default, Cloud Infrastructure Center creates a few flavors; in this sample, `Medium` is used.

Picture 6 is an example of the creation process for one node. If you have a 3-master + 2-worker environment, you need to perform the process 6 times, once for the bootstrap node, 3 times for the masters, and 2 times for the workers, each with its corresponding input (a scripted alternative is sketched after Picture 6).


Picture 6                    
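If you script the node creation against the OpenStack-compatible API instead of the UI, a hypothetical call for the bootstrap node could look like this (image, flavor, and network identifiers are illustrative):

$ openstack server create \
    --image "rhcos-4.6.8-dasd" \
    --flavor medium \
    --nic net-id=flat01,v4-fixed-ip=172.26.104.30 \
    --user-data bootstrap-shim.ign \
    bootstrap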

Monitoring the bootstrap process for completion

The administrators can use the `$ openshift-install --dir test-target wait-for bootstrap-complete` target of the Red Hat OpenShift installer to monitor the cluster bootstrapping. The command succeeds when it notices the bootstrap-complete event from the Kubernetes API server. This event is generated by the bootstrap machine after the Kubernetes API server has been bootstrapped on the control plane machines. You can refer to Create cluster for further information.

Picture 7

On the machine that runs openshift-install, you must add a DNS record that maps `api.openshift.example.com` to the IP of the bastion machine.

# cat /etc/hosts
....
172.26.104.63 api.openshift.example.com

Then, start the installation program:

$ ./openshift-install wait-for bootstrap-complete --dir test-target/
INFO Waiting up to 20m0s for the Kubernetes API at https://api.openshift.example.com:6443... 
INFO API v1.19.0+7070803 up                       
INFO Waiting up to 30m0s for bootstrapping to complete... 
INFO It is now safe to remove the bootstrap resources 
INFO Time elapsed: 4m13s   

Code 10 

After the bootstrap has completed, the bootstrap node can be removed. In our example, the 3 master nodes and 2 worker nodes are shown in Picture 7 and Code 10.

You can also use the Red Hat OpenShift command-line interface (CLI) to check the Red Hat OpenShift status. The CLI can help to create applications and manage Red Hat OpenShift projects from a terminal, see: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/cli_tools/index

Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
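To verify that the CLI can reach the cluster, you can, for example, check the current user and list the cluster nodes:

$ oc whoami
system:admin
$ oc get nodes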

Approve the worker CSRs

Follow the manual provisioning steps to approve the CSRs, as in the sketch below.
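As a sketch, pending CSRs can be listed and approved as follows (the last command approves all pending CSRs at once):

$ oc get csr
$ oc adm certificate approve <csr_name>
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve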

Check operator status

After the CSRs are approved and after waiting for a few minutes, all operators become AVAILABLE=True, PROGRESSING=False, and DEGRADED=False, as shown in Code 11:

$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.8     True        False         False      2m20s
cloud-credential                           4.6.8     True        False         False      111m
cluster-autoscaler                         4.6.8     True        False         False      103m
config-operator                            4.6.8     True        False         False      104m
console                                    4.6.8     True        False         False      9m7s
csi-snapshot-controller                    4.6.8     True        False         False      103m
dns                                        4.6.8     True        False         False      102m
etcd                                       4.6.8     True        False         False      40m
image-registry                             4.6.8     True        False         False      35m
ingress                                    4.6.8     True        False         False      12m
insights                                   4.6.8     True        False         False      104m
kube-apiserver                             4.6.8     True        False         False      38m
kube-controller-manager                    4.6.8     True        False         False      101m
kube-scheduler                             4.6.8     True        False         False      101m
kube-storage-version-migrator              4.6.8     True        False         False      12m
machine-api                                4.6.8     True        False         False      102m
machine-approver                           4.6.8     True        False         False      103m
machine-config                             4.6.8     True        False         False      101m
marketplace                                4.6.8     True        False         False      41m
monitoring                                 4.6.8     True        False         False      12m
network                                    4.6.8     True        False         False      104m
node-tuning                                4.6.8     True        False         False      104m
openshift-apiserver                        4.6.8     True        False         False      35m
openshift-controller-manager               4.6.8     True        False         False      101m
openshift-samples                          4.6.8     True        False         False      35m
operator-lifecycle-manager                 4.6.8     True        False         False      103m
operator-lifecycle-manager-catalog         4.6.8     True        False         False      103m
operator-lifecycle-manager-packageserver   4.6.8     True        False         False      21m
service-ca                                 4.6.8     True        False         False      103m
storage                                    4.6.8     True        False         False      104m

Code 11               

Configuring storage for the image registry

For more configuration options for the image registry, go here.
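For a non-production cluster without persistent storage for the registry, one option from the Red Hat OpenShift documentation is to back the registry with emptyDir storage; a minimal sketch:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'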

Monitoring the cluster installation for completion

The administrators can use the `$ openshift-install --dir test-target wait-for install-complete` target of the Red Hat OpenShift installer to monitor the completion of the cluster installation. The command succeeds when it notices that the Cluster Version Operator has completed rolling out the Red Hat OpenShift cluster. Refer to Completing installation for further information.
Code 12 is an example of running the command:

$ ./openshift-install wait-for install-complete --dir test-target 
INFO Waiting up to 40m0s for the cluster at https://api.openshift.example.com:6443 to initialize...               
INFO Waiting up to 10m0s for the openshift-console route to be created...           
INFO Install complete!                            
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/upi-demo/test-target/auth/kubeconfig' 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.openshift.example.com 
INFO Login to the console with user: "kubeadmin", and password: "sAH7y-DmvfS-uoAH4-W6rHh"                
INFO Time elapsed: 51s

Code 12
