Using a bash script with the OpenShift Installer API
In the first blog, Explore OpenShift Virtualization with a Single Node OpenShift Cluster on IBM Cloud Virtual Private Cloud, I described using the Red Hat OpenShift Assisted Installer to deploy a Single Node OpenShift (SNO) cluster on an IBM Cloud Bare Metal Server for Virtual Private Cloud (VPC). While IBM Cloud Bare Metal Servers for VPC are not yet a supported option by Red Hat, the speed and ease of deployment make this an ideal demo or proof-of-concept deployment of OpenShift Virtualization and the Migration Toolkit for Virtualization.
The flow of the tasks was as follows:
- Use the Assisted Installer web console to configure a cluster and retrieve the iPXE content.
- Use the script to provision IBM Cloud VPC resources.
- Use the Assisted Installer web console to install the cluster.
In this blog we will look at using two additional scripts that use the Assisted Installer API. Using these scripts means that you do not need to use the Assisted Installer web console directly. The new flow is as follows:
- The CreateSNO.sh script calls RegisterSNO.sh, which uses the Assisted Installer API to configure a cluster and retrieve the iPXE content.
- The CreateSNO.sh script provisions the IBM Cloud VPC resources.
- The CreateSNO.sh script calls InstallSNOCluster.sh, which uses the Assisted Installer API to install the cluster.
Overview
The OpenShift Container Platform installation program offers four methods for deploying a cluster:
- Interactive - Using the web-based Assisted Installer is an ideal approach for clusters with networks connected to the internet and is the easiest way to install OpenShift Container Platform: it provides smart defaults and performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios.
- Local Agent-based - Deploys a cluster in disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first.
- Automated - The installation program uses each cluster host’s baseboard management controller (BMC) for provisioning.
- Full control - Deploys a cluster on infrastructure that you prepare and maintain, which provides maximum customizability.
The script in the previous blog is amended to call two new scripts: RegisterSNO.sh and InstallSNOCluster.sh. The RegisterSNO.sh script is called first and creates a new cluster:
```bash
# Create the cluster on the assisted installer
if [[ $create_userdata_file = "yes" ]]; then
    print_message "Running RegisterSNO.sh to register the cluster on the assisted installer and get the iPXE file"
    if ! ./RegisterSNO.sh; then
        print_message "RegisterSNO.sh failed"
        exit 1
    fi
else
    print_message "Not running RegisterSNO.sh to register the cluster on the assisted installer and get the iPXE file"
fi
```
The InstallSNOCluster.sh script is called second, after all the IBM Cloud VPC resources have been provisioned:
```bash
# Start the Cluster install
if [[ $create_cluster = "yes" ]]; then
    ./InstallSNOCluster.sh
else
    print_message "Not installing the cluster"
fi
```
Offline Token
The offline token is needed to interact with the Assisted Installer API. This token can be downloaded from the Assisted Installer web console; the screenshots below show how to get the token. Note that the documentation uses the terms offline token and API token interchangeably, but this is the offline token.
Copy the token to the clipboard and paste it into the following terminal command: `export OFFLINE_TOKEN=<copied_offline_token>`
API Token
API calls require authentication with the API token. The offline token is used to obtain an API token, as shown in the RegisterSNO.sh script below.
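Every authenticated API call then passes the API token as a Bearer header. A minimal standalone sketch (using the clusters endpoint that also appears later in the scripts, and assuming `$api_token` has already been populated):

```bash
# Minimal sketch: list clusters using the API token obtained from the
# offline token. Assumes $api_token has already been populated.
curl --silent \
    --header "Authorization: Bearer $api_token" \
    "https://api.openshift.com/api/assisted-install/v2/clusters"
```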
Pull Secret
Many of the Assisted Installer API calls require the pull secret. The pull secret is downloaded from the Assisted Installer web console and placed in a file; the screenshot below shows how to get the pull secret.
The pull secret is placed in a file so that it can be referenced in API calls.
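For example, the scripts convert the file contents into a JSON string value that can be embedded in request bodies (the same jq call used in RegisterSNO.sh below):

```bash
# Quote the contents of pull-secret.txt as a single JSON string value.
pull_secret=$(cat pull-secret.txt | jq -R .)
```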
RegisterSNO.sh script
The first part of the script checks that the offline token has been set and that the pull secret file and the SSH public key file are accessible:
```bash
# This script should not be run directly, but be called from CreateSNO.sh
# DO NOT CHANGE ANYTHING BELOW
print_message "Running RegisterSNO.sh"

assisted_service_api_base="https://api.openshift.com/api/assisted-install"

# Check to see that there is a variable called $OFFLINE_TOKEN and it is populated
if [ -z "$OFFLINE_TOKEN" ]; then
    echo "You need to define your Openshift offline token with OFFLINE_TOKEN=\"<PASTE_API_TOKEN_HERE>\" before running this script"
    exit 1
fi
print_message "OFFLINE_TOKEN is defined"

# Check that the pull secret file is in the CWD
if [ ! -f ./pull-secret.txt ]; then
    echo "File pull-secret.txt not found in the current working directory"
    exit 1
fi
print_message "The file pull-secret.txt is in the current working directory"
pull_secret=$(cat pull-secret.txt | jq -R .)

# Check the SSH public key file exists
if [ ! -f "$my_public_ssh_key_path" ]; then
    echo "SSH public key file $my_public_ssh_key_path not found"
    exit 1
fi
print_message "The SSH public key file exists"
cluster_sshkey=$(cat "$my_public_ssh_key_path" | jq -R .)
```
The script uses curl with the `--write-out` option to collect the HTTP status code for error checking as well as the response body. An example of the format of the result of the API call is shown below:
"access_token": "<redacted>",
"refresh_token": "<redacted>",
"id_token": "<redacted>",
"session_state": "<redacted>",
"scope": "openid api.iam.service_accounts roles web-origins offline_access"
The return code is retrieved from `$result` with `return_code="${result:${#result}-3}"`, which takes the last three characters of `$result` (`${#result}` gives the total number of characters in `$result`). The body of the response is retrieved with `body="${result:0:${#result}-3}"`. The `access_token` is then extracted from the body using jq.
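As a standalone illustration of this substring handling (sample data, not part of the scripts):

```bash
# Split a combined curl result of the form '{body}200' into body and code.
result='{"access_token":"abc"}200'
return_code="${result:${#result}-3}"   # last 3 characters  -> 200
body="${result:0:${#result}-3}"        # everything before  -> {"access_token":"abc"}
echo "$return_code"                              # prints: 200
echo "$body" | jq --raw-output '.access_token'   # prints: abc
```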
Using the offline token, an API token is obtained as follows:
```bash
# Get an access token using the OFFLINE_TOKEN
# The result from curl contains the body and the http_code e.g. {body here}200
# (command substitution strips the trailing newline added by --write-out)
print_message "Getting an API token"
result=$(curl --silent \
    --write-out "%{http_code}\n" \
    --header "Accept: application/json" \
    --header "Content-Type: application/x-www-form-urlencoded" \
    --data-urlencode "grant_type=refresh_token" \
    --data-urlencode "client_id=cloud-services" \
    --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
    "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token")
return_code="${result:${#result}-3}"
if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi
if [[ $return_code -ne "200" ]]; then print_message "Failed: Curl command failed with code $return_code"; exit 1; fi
api_token=$(echo $body | jq --raw-output '.access_token')
print_message "Success: Got an API token"
```
The next stage is to define a cluster in a temporary JSON file and then use this file in a POST call to the `clusters` API. From the response body the ID of the cluster is obtained using jq:
```bash
# Define a SNO cluster deployment with Virtualization and LVM
print_message "Defining a SNO cluster deployment with Virtualization and LVM"
cat << EOF > ./sno-deployment.json
{
  "name": "$my_cluster_name",
  "base_dns_domain": "$my_base_domain",
  "openshift_version": "$target_version",
  "cpu_architecture": "x86_64",
  "high_availability_mode": "None",
  "ssh_public_key": $cluster_sshkey,
  "pull_secret": $pull_secret
}
EOF
# --data implies a POST request
result=$(curl --silent \
    --write-out "%{http_code}\n" \
    --data @./sno-deployment.json \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer $api_token" \
    "$assisted_service_api_base/v2/clusters")
return_code="${result:${#result}-3}"
if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi
if [[ $return_code -ne "201" ]]; then print_message "Failed: Curl command failed with code $return_code. Body $body"; exit 1; fi
cluster_id=$(echo $body | jq --raw-output '.id')
print_message "Success: Defined a SNO cluster deployment with Virtualization and LVM with cluster ID $cluster_id"
rm -f ./sno-deployment.json
```
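Note that the definition above does not itself name the Virtualization and LVM operators. As a hedged sketch (not taken from the original script), the assisted-service API allows operators to be requested at definition time through the `olm_operators` field:

```bash
# Hypothetical variant of the deployment definition: request the
# Virtualization (cnv) and LVM (lvm) operators via the olm_operators
# field of the assisted-service API, alongside the fields shown above.
cat << EOF > ./sno-deployment.json
{
  "name": "$my_cluster_name",
  "base_dns_domain": "$my_base_domain",
  "openshift_version": "$target_version",
  "cpu_architecture": "x86_64",
  "high_availability_mode": "None",
  "ssh_public_key": $cluster_sshkey,
  "pull_secret": $pull_secret,
  "olm_operators": [ { "name": "cnv" }, { "name": "lvm" } ]
}
EOF
```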
Once the cluster has been defined, it is registered as an infrastructure environment using the cluster ID captured previously:
```bash
# Register the SNO cluster deployment with Virtualization and LVM as an infrastructure environment
print_message "Registering the SNO cluster deployment with Virtualization and LVM as an infrastructure environment"
cat << EOF > ./infra-envs.json
{
  "name": "$my_cluster_name",
  "pull_secret": $pull_secret,
  "ssh_authorized_key": $cluster_sshkey,
  "image_type": "full-iso",
  "cluster_id": "${cluster_id}",
  "openshift_version": "$cluster_version"
}
EOF
result=$(curl --silent \
    --write-out "%{http_code}\n" \
    --data @./infra-envs.json \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer $api_token" \
    "$assisted_service_api_base/v2/infra-envs")
return_code="${result:${#result}-3}"
if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi
if [[ $return_code -ne "201" ]]; then print_message "Failed: Curl command failed with code $return_code. Body $body"; exit 1; fi
infra_env_id=$(echo $body | jq --raw-output '.id')
print_message "Success: Registered the SNO cluster deployment with Virtualization and LVM with infrastructure environment ID $infra_env_id"
```
Now that the cluster is registered as an infrastructure environment the iPXE script can be downloaded and saved as a file:
```bash
# Download the ipxe script as a file
print_message "Downloading the iPXE script as a file"
curl --silent \
    --header "Authorization: Bearer $api_token" \
    "$assisted_service_api_base/v2/infra-envs/$infra_env_id/downloads/files?file_name=ipxe-script" > $sno_userdata_file
if [ ! -f ./$sno_userdata_file ]; then print_message "Failed: File $sno_userdata_file not downloaded to the current working directory"; exit 1; fi
print_message "Success: Downloaded the iPXE script to $sno_userdata_file"
```
The iPXE script file needs to be modified to prepend the IBM Cloud VPC specific lines, which include the NTP server:

```
ntp time.adn.networklayer.com
```

```bash
# Modify the iPXE script to include the dhcp retries and the ntp server
print_message "Modifying the iPXE script to include the dhcp retries and the ntp server"
# (the edit commands prepend the dhcp retry settings and the line
#  "ntp time.adn.networklayer.com" to the downloaded script)
print_message "Success: Modified the iPXE script $sno_userdata_file for use with IBM Cloud bare metal servers for VPC"
```
The script now ends and returns to the CreateSNO.sh script for provisioning of the IBM Cloud VPC resources.
InstallSNOCluster.sh script
Once the IBM Cloud VPC resources have been provisioned, the InstallSNOCluster.sh script is started. The script gets a new API token (see the previous section on how this is achieved). A function named `waiting` is defined, which provides a rudimentary progress bar:
```bash
# This script should not be run directly, but be called from CreateSNO.sh
# DO NOT CHANGE ANYTHING BELOW

# Function to print a waiting..... progress. Pass number of seconds to wait as the first argument and the message as the 2nd
waiting() {
    sleep_time=$1; waiting_msg=$2; sleep_loop=$sleep_time; dots=""
    while [[ $sleep_loop -ne 0 ]]; do
        dots="$dots."
        progress="($sleep_loop/$sleep_time)"
        printf "$waiting_msg$dots$progress\\r"
        sleep 1
        sleep_loop=$((sleep_loop-1))
    done
    printf "\n"
}

print_message "Running InstallSNOCluster.sh"
assisted_service_api_base="https://api.openshift.com/api/assisted-install"

# Check to see that there is a variable called $OFFLINE_TOKEN and it is populated
if [ -z "$OFFLINE_TOKEN" ]; then
    echo "You need to define your Openshift offline token with OFFLINE_TOKEN=\"<PASTE_API_TOKEN_HERE>\" before running this script"
    exit 1
fi
print_message "OFFLINE_TOKEN is defined"

# Get an access token using the OFFLINE_TOKEN
# The result from curl contains the body and the http_code e.g. {body here}200
print_message "Getting an API token"
result=$(curl --silent \
    --write-out "%{http_code}\n" \
    --header "Accept: application/json" \
    --header "Content-Type: application/x-www-form-urlencoded" \
    --data-urlencode "grant_type=refresh_token" \
    --data-urlencode "client_id=cloud-services" \
    --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
    "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token")
return_code="${result:${#result}-3}"
if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi
if [[ $return_code -ne "200" ]]; then print_message "Failed: Curl command failed with code $return_code"; exit 1; fi
api_token=$(echo $body | jq --raw-output '.access_token')
print_message "Success: Got an API token"
```
Once an API token has been received, the cluster ID is retrieved by using the cluster name as a filter. The cluster ID is then used in the cluster install API call:
```bash
# Get the ID of the cluster
result=$(curl --silent \
    --write-out "%{http_code}\n" \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer $api_token" \
    "$assisted_service_api_base/v2/clusters")
return_code="${result:${#result}-3}"
if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi
if [[ $return_code -ne "200" ]]; then print_message "Failed: Curl command failed with code $return_code"; exit 1; fi
cluster_id=$(echo $body | jq -r '.[] | select(.name == "'$my_cluster_name'" and .status == "ready") | .id')
print_message "Success: Cluster ID: $cluster_id"

# Start the cluster install (the actions/install endpoint requires a POST)
result=$(curl --silent \
    --write-out "%{http_code}\n" \
    --request POST \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer $api_token" \
    "$assisted_service_api_base/v2/clusters/$cluster_id/actions/install")
return_code="${result:${#result}-3}"
if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi
if [[ $return_code -ne "202" ]]; then print_message "Failed: Curl command $assisted_service_api_base/v2/clusters/$cluster_id/actions/install failed with code $return_code. Response: $body"; exit 1; fi
print_message "Success: Cluster install started"
```
The script now uses the `waiting` function, defined earlier, to repeatedly poll the cluster hosts API, extracting the status with jq:
```bash
# Check the install reaches done
host_not_done=1
host_status=""
while [ $host_not_done -ne 0 ] ; do
    waiting $sleep_time_seconds "Waiting for cluster install"
    if [[ $host_status != "Done" ]]; then
        host_status=$(curl --silent \
            --header "Content-Type: application/json" \
            --header "Authorization: Bearer $api_token" \
            "$assisted_service_api_base/v2/clusters/$cluster_id/hosts" \
            | jq -r '.[] | .status_info')
        printf "Host status $host_status\\r"
    fi
    if [[ $host_status = "Done" ]] ; then
        host_not_done=0
        printf "Host status $host_status\\r"
    fi
done
```
Once the host status is `Done`, the script ends, returning to the CreateSNO.sh script for the final task of printing out details to the user.
Previously
In the previous articles we looked at:
Coming up
In the next articles we will be looking at:
- Deploying a consolidated cluster across three availability zones.
- Virtual machine networking.