This blog describes using the Red Hat OpenShift Assisted Installer to deploy a Single Node OpenShift (SNO) cluster on an IBM Cloud Bare Metal Server for Virtual Private Cloud (VPC). While IBM Cloud Bare Metal Servers for VPC are not yet a supported option from Red Hat, the speed and ease of deployment make this an ideal demo or proof-of-concept environment for OpenShift Virtualization and the Migration Toolkit for Virtualization.
Red Hat OpenShift Virtualization
Red Hat OpenShift Virtualization provides a modern platform for organizations to run and deploy new and existing virtual machine (VM) workloads. It is a feature of Red Hat OpenShift that allows easy management of traditional VM workloads on a trusted, consistent, and comprehensive hybrid cloud application platform. OpenShift Virtualization offers a path for infrastructure modernization, taking advantage of the simplicity and speed of a cloud-native application platform while preserving existing virtualization investments and embracing modern management principles.
Migration Toolkit for Virtualization
OpenShift Virtualization includes a simple way to migrate VMs from other hypervisors with the Migration Toolkit for Virtualization. Migrations are performed in a few simple steps: first, provide source and destination credentials; then map the source and destination infrastructure and create a choreographed plan; and finally, execute the migration.
Single Node OpenShift cluster
A Single Node OpenShift (SNO) cluster is a configuration that consists of a single control plane node that is also configured to run workloads. This configuration allows users to deploy a smaller OpenShift footprint, making it a useful solution for resource-constrained environments, demos, proofs of concept, or on-premises deployments. It is important to keep in mind that SNO lacks high availability, so it may not be suitable for all of your workloads.
OpenShift Assisted Installer
The OpenShift Assisted Installer for Red Hat OpenShift Container Platform is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console. The Assisted Installer supports various deployment platforms with a focus on bare metal, Nutanix, vSphere, and Oracle Cloud Infrastructure. The installer can be used to deploy:
- Highly available OpenShift Container Platform or single-node OpenShift cluster.
- Optionally, OpenShift Virtualization and Red Hat OpenShift Data Foundation.
The OpenShift Assisted Installer hosts installation artifacts such as Ignition files, the installation configuration, and discovery ISOs, and is exposed through both a UI and a REST API.
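As a taste of the REST API, the sketch below lists the clusters registered with the service. It assumes the jq utility is installed and that OFFLINE_TOKEN holds an offline token copied from console.redhat.com/openshift/token; it is only illustrative:
# Exchange the Red Hat offline token for a short-lived access token
TOKEN=$(curl -s https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
  -d grant_type=refresh_token -d client_id=cloud-services \
  -d refresh_token="${OFFLINE_TOKEN}" | jq -r .access_token)
# List clusters known to the Assisted Installer service
curl -s -H "Authorization: Bearer ${TOKEN}" \
  https://api.openshift.com/api/assisted-install/v2/clusters | jq '.[].name'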
IBM Cloud Bare Metal Server for VPC
One of the key features of a bare metal server on VPC is that you can choose to network boot an operating system. In this deployment, we use that feature to boot the server from the artifacts hosted by the Assisted Installer.
Deployment topology
The diagram below shows the deployment topology of the resources needed:
After deployment, the topology can be further modified to suit your requirements, including the following:
- Connecting the VPC to a Transit Gateway (TGW) to reach existing VPCs or Classic resources, including VCF on Classic instances.
- Adding a client-to-site VPN to enable access over a VPN.
- Configuring Linux or OVS bridges to connect to VPC subnets (a sketch of a Linux bridge policy follows this list).
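For example, once the Kubernetes NMState operator is installed, a Linux bridge can be declared with a NodeNetworkConfigurationPolicy. The policy name and the interface name ens2 below are assumptions for this sketch and must be adapted to your server:
oc apply -f - <<EOF
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-vm-network            # example policy name
spec:
  desiredState:
    interfaces:
      - name: br1                 # bridge that VM networks can attach to
        type: linux-bridge
        state: up
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: ens2          # assumed interface connected to the VM subnet
EOF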
Deployment process overview
The deployment is a five-step process:
1. Use the OpenShift Assisted Installer to create a SNO.
2. Create a userdata file.
3. Use the IBM UI, CLI or API to create the VPC resources.
4. Return to the OpenShift Assisted Installer to complete the installation of the cluster and retrieve the credentials.
5. Make the storage class default.
Use the OpenShift Assisted Installer to create a SNO
- Using a browser, navigate to console.redhat.com and log in.
- Select OpenShift.
- Select Create Cluster.
- Select Data Center, then Create Cluster:
- Enter a Cluster name and a Base domain:
- Select the single node OpenShift (SNO) option, then select Next:
- Select Install OpenShift Virtualization, which automatically selects Install Logical Volume Manager Storage, then select Next:
- Select Add Host:
- Paste your Public SSH key and select Generate Discovery ISO:
- Copy the Discovery ISO URL:
Create a userdata file
- In a text editor, create a file called userdata containing the following commands, replacing <YOUR_DISCOVERY_ISO_URL> with the Discovery ISO URL you copied from the console above.
#!ipxe
# Keep retrying DHCP until the server obtains an address
:retry_dhcp
dhcp || goto retry_dhcp
sleep 2
# Set the clock from IBM Cloud's NTP server
ntp time.adn.networklayer.com
# Boot directly from the discovery ISO hosted by the Assisted Installer
sanboot <YOUR_DISCOVERY_ISO_URL>
- Save the file.
Use the IBM UI, CLI or API to create the VPC resources
- Using the IBM UI, CLI or API, create the following resources (a CLI sketch follows at the end of this section):
  - Resource group.
  - VPC with a prefix.
  - Subnet with a public gateway for the bare metal server.
  - Subnet with a public gateway for the VM networks.
  - Security group for the bare metal server.
  - An uploaded public SSH key.
  - Floating IP.
  - A virtual network interface.
  - A bare metal server.
- When creating the security group:
- When creating the bare metal server, click to import an iPXE script and select the userdata file created in the previous step:
Ensure that you use a profile that includes multiple local disks. The first disk will be used for the boot disk while the others will be consumed by LVM for VM storage.
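The ibmcloud CLI sketch below shows roughly how these resources could be created. All names, regions, zones and CIDRs are example values, some flags vary between CLI versions, and the virtual network interface, floating IP and bare metal server are often easier to create in the UI:
# Resource group and targeting (example names and region)
ibmcloud resource group-create sno-demo
ibmcloud target -g sno-demo -r eu-de
# VPC with an address prefix covering the two subnets
ibmcloud is vpc-create sno-vpc
ibmcloud is vpc-address-prefix-create sno-prefix sno-vpc eu-de-1 10.240.0.0/16
# Public gateway, one subnet for the bare metal server and one for VM networks
ibmcloud is public-gateway-create sno-pgw sno-vpc eu-de-1
ibmcloud is subnet-create sno-bm-subnet sno-vpc --zone eu-de-1 --ipv4-cidr-block 10.240.0.0/24 --pgw sno-pgw
ibmcloud is subnet-create sno-vm-subnet sno-vpc --zone eu-de-1 --ipv4-cidr-block 10.240.1.0/24 --pgw sno-pgw
# SSH key and security group for the bare metal server
ibmcloud is key-create sno-key @~/.ssh/id_rsa.pub
ibmcloud is security-group-create sno-bm-sg sno-vpc
# The virtual network interface, floating IP and bare metal server flags are
# version dependent; see: ibmcloud is bare-metal-server-create --help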
Return to the OpenShift Assisted Installer to complete the installation of the cluster and retrieve the credentials
1. Wait for the bare metal server to boot and enter the running state, then return to the Red Hat OpenShift Assisted Installer. You should see that the Host inventory is populated with the server details.
2. Scroll down the page and select Next:
3. Scroll down the page; everything can be left at the defaults. Select Next.
4. On the next page, scroll down and select Install Cluster:
5. The OpenShift Assisted Installer will now install all the cluster components onto the VPC bare metal server:
6. While the cluster is being installed, edit your /etc/hosts file to map the Floating IP address to the following host names:
<FLOATING_IP_ADDRESS> api.<CLUSTER_NAME>.<BASE_DOMAIN>
<FLOATING_IP_ADDRESS> oauth-openshift.apps.<CLUSTER_NAME>.<BASE_DOMAIN>
<FLOATING_IP_ADDRESS> console-openshift-console.apps.<CLUSTER_NAME>.<BASE_DOMAIN>
<FLOATING_IP_ADDRESS> grafana-openshift-monitoring.apps.<CLUSTER_NAME>.<BASE_DOMAIN>
<FLOATING_IP_ADDRESS> thanos-querier-openshift-monitoring.apps.<CLUSTER_NAME>.<BASE_DOMAIN>
<FLOATING_IP_ADDRESS> prometheus-k8s-openshift-monitoring.apps.<CLUSTER_NAME>.<BASE_DOMAIN>
<FLOATING_IP_ADDRESS> alertmanager-main-openshift-monitoring.apps.<CLUSTER_NAME>.<BASE_DOMAIN>
For example, with:
- Cluster name: demo-01
- Base domain: demo.cloud
the hosts file would contain the following:
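(The entries below use 192.0.2.10 as a stand-in for your actual Floating IP address.)
192.0.2.10 api.demo-01.demo.cloud
192.0.2.10 oauth-openshift.apps.demo-01.demo.cloud
192.0.2.10 console-openshift-console.apps.demo-01.demo.cloud
192.0.2.10 grafana-openshift-monitoring.apps.demo-01.demo.cloud
192.0.2.10 thanos-querier-openshift-monitoring.apps.demo-01.demo.cloud
192.0.2.10 prometheus-k8s-openshift-monitoring.apps.demo-01.demo.cloud
192.0.2.10 alertmanager-main-openshift-monitoring.apps.demo-01.demo.cloud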
7. After approximately 15 minutes the cluster install should be complete, and you can get the credentials and click on the URL:
8. Your browser will take you to the cluster console. You will need to confirm that you want to visit this site, as the cluster has been installed with self-signed certificates. You will need to confirm twice, as you are redirected:
9. Log into the cluster using the credentials:
Make the storage class default
- While LVM has been installed and a storage class has been configured, the storage class has not been marked as default. As a result, the images for the templates cannot be downloaded. To mark the storage class as default, we will use the oc CLI.
- To download the oc CLI, return to the Red Hat Hybrid Cloud Console at https://console.redhat.com/openshift/downloads and download oc.
- Follow the instructions to install oc for your OS at https://docs.openshift.com/container-platform/latest/cli_reference/openshift_cli/getting-started-cli.html.
- To use oc you will need to log in. At the cluster console UI, select kube:admin at the top right and select Copy login command.
- You will be redirected to a page where you can copy the line under Log in with this token:
- Log into the cluster with oc using the line copied above, then enter the following:
oc patch storageclass lvms-vg1 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
You should see a confirmation as shown below:
- You will see the storage class has been marked as default (you can also verify this from the CLI, as shown after this list):
- If you navigate to Virtualization and Templates, you will see that the boot source has been automatically imported and is available:
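As a quick CLI check, the default annotation can also be confirmed with oc; the exact columns in the output depend on your client and LVM Storage versions:
oc get storageclass
# lvms-vg1 should now show "(default)" next to its name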
You are now ready to provision a VM.
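As an optional smoke test before using the imported boot sources, the sketch below creates a minimal VM in your current project from a public Fedora container disk; the name test-vm and the sizing are arbitrary examples:
oc apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: test-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
EOF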
Post-deployment
After the initial deployment, consider:
Useful resources
Read the following for further information:
Coming up
In the next articles, we will be looking at:
- Using a script to provision the IBM Cloud resources.
- Expanding the script to use the OpenShift Installer API.
- Deploying a consolidated cluster across three availability zones.
- Virtual machine networking.