Using a Script with the OpenShift Installer API

By Neil Taylor posted 6 days ago

  

Using a bash script with the OpenShift Installer API

In the first blog, Explore OpenShift Virtualization with a Single Node OpenShift Cluster on IBM Cloud Virtual Private Cloud, I described using the Red Hat OpenShift Assisted Installer to deploy a Single Node OpenShift (SNO) cluster on an IBM Cloud Bare Metal Server for Virtual Private Cloud (VPC). While IBM Cloud Bare Metal Servers for VPC are not yet a supported option from Red Hat, the speed and ease of deployment make this an ideal demo or proof-of-concept deployment of OpenShift Virtualization and the Migration Toolkit for Virtualization.

In the second blog, Using a Script to deploy OpenShift Virtualization with a Single Node OpenShift Cluster on IBM Cloud Virtual Private Cloud, I described a simple script that uses the IBM Cloud CLI to deploy the IBM Cloud resources needed for the SNO deployment.

The flow of the tasks was as follows:

  1. Use the OpenShift Installer UI to configure a cluster and retrieve the iPXE content.
  2. Use the script to provision IBM Cloud VPC resources.
  3. Use the OpenShift Installer UI to install the cluster.

In this blog we will look at two additional scripts that use the OpenShift Installer API. Using these scripts means that you do not use the OpenShift Installer UI directly. The new flow is as follows:

  1. The script calls another script that uses the OpenShift Installer API to configure a cluster and retrieve the iPXE content.
  2. The script provisions IBM Cloud VPC resources.
  3. The script calls another script that uses the OpenShift Installer API to install the cluster.

Overview

The OpenShift Container Platform installation program offers four methods for deploying a cluster, which are detailed in the following list:

  • Interactive - Using the web-based Assisted Installer is an ideal approach for clusters with networks connected to the internet. It is the easiest way to install OpenShift Container Platform; it provides smart defaults and performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios.
  • Local Agent-based - Deploys a cluster on disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first.
  • Automated - The installation program uses each cluster host’s baseboard management controller (BMC) for provisioning.
  • Full control - Deploys a cluster on infrastructure that you prepare and maintain, which provides maximum customizability.

We are focusing on the web-based Assisted Installer; see Installing OpenShift Container Platform with the Assisted Installer and, in particular, Installing with the Assisted Installer API. Also see API for the documentation of the Assisted Installer API.

The script from the previous blog is amended to call two new scripts: RegisterSNO.sh and InstallSNOCluster.sh. The RegisterSNO.sh script is called first and creates a new cluster:

# Create the cluster on the assisted installer
if [[ $create_userdata_file = "yes" ]]
then
  print_message "Running RegisterSNO.sh to register the cluster on the assisted installer and get the iPXE file"
  . ./RegisterSNO.sh
  if [[ $? -eq 1 ]]
  then
    print_message "RegisterSNO.sh failed"
    exit 1
  fi
else
  print_message "Not running RegisterSNO.sh to register the cluster on the assisted installer and get the iPXE file"
fi

The InstallSNOCluster.sh script is called second, after all the IBM Cloud VPC resources have been provisioned. Both scripts are sourced (`. ./RegisterSNO.sh` and `. ./InstallSNOCluster.sh`) rather than executed, so they share the variables already set by CreateSNO.sh, such as the cluster name, the SSH key path, and the iPXE output file name:

# Start the Cluster install
if [[ $create_cluster = "yes" ]]
then
  . ./InstallSNOCluster.sh
else
  print_message "Not installing the cluster"
fi

Offline Token

The offline token is needed to interact with the Assisted Installer API. This token can be downloaded from the Assisted Installer web console; the screenshots below show how to get the token. The documentation uses the terms offline token and API token interchangeably, but this is the offline token.

Copy the token to the clipboard and paste it into the following terminal command: `export OFFLINE_TOKEN=<copied_offline_token>`

API Token

API calls require authentication with an API token. The offline token is used to obtain an API token; the RegisterSNO.sh and InstallSNOCluster.sh scripts shown below perform this exchange with curl.

Pull Secret

Many of the Assisted Installer API calls require the pull secret. The pull secret is downloaded from the Assisted Installer and saved in a file; the screenshot below shows how to get the pull secret.

The pull secret is placed in a file so that it can be referenced in API calls.
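For example, with the pull secret saved as pull-secret.txt in the current working directory (the location the script checks for), it can be JSON-encoded with jq so that it can be embedded directly in a JSON request body, which is exactly what RegisterSNO.sh does below:

# JSON-encode the pull secret so it can be embedded in a JSON request body
pull_secret=$(cat pull-secret.txt | jq -R .)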

RegisterSNO.sh script

The first part of the script checks that the offline token has been set and that the pull secret file and SSH public key file are accessible:

#!/bin/bash

# This script should not be run directly, but be called from CreateSNO.sh

# DO NOT CHANGE ANYTHING BELOW

print_message "Running RegisterSNO.sh"

assisted_service_api_base="https://api.openshift.com/api/assisted-install"

# Check that the variable $OFFLINE_TOKEN exists and is populated
if [ -z "$OFFLINE_TOKEN" ]
then
  echo $OFFLINE_TOKEN
  echo "You need to define your OpenShift offline token with OFFLINE_TOKEN=\"<PASTE_API_TOKEN_HERE>\" before running this script"
  exit 1
fi
print_message "OFFLINE_TOKEN is defined"

# Check that the pull secret file is in the current working directory
if [ ! -f ./pull-secret.txt ]
then
  echo "File pull-secret.txt not found in the current working directory"
  exit 1
fi
print_message "The file pull-secret.txt is in the current working directory"
pull_secret=$(cat pull-secret.txt | jq -R .)

# Check that the SSH public key file exists
if [ ! -f "$my_public_ssh_key_path" ]
then
  echo "SSH public key file $my_public_ssh_key_path not found"
  exit 1
fi
print_message "The SSH public key file exists"
cluster_sshkey=$(cat "$my_public_ssh_key_path" | jq -R .)

The script uses curl with the --write-out option to collect the status code for error checking as well as the response body. An example of the format of the result of the API call is shown below:

{
  "access_token": "<redacted>",
  "expires_in": 900,
  "refresh_expires_in": 0,
  "refresh_token": "<redacted>",
  "token_type": "Bearer",
  "id_token": "<redacted>",
  "not-before-policy": 0,
  "session_state": "<redacted>",
  "scope": "openid api.iam.service_accounts roles web-origins offline_access"
}
200

The return code is retrieved from $result using return_code="${result:${#result}-3}", which takes the last three characters of $result (${#result} gives the total number of characters in $result).

The body of the response is retrieved from $result using body="${result:0:${#result}-3}", which takes everything before those last three characters. The access_token is then extracted from the body using jq.
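As a minimal illustration of this splitting, using a made-up response string rather than a real token:

# Illustration only: split a combined body+status string into its parts
result='{"access_token": "<redacted>"}200'
return_code="${result:${#result}-3}"   # last three characters: 200
body="${result:0:${#result}-3}"        # everything before them
echo "$body" | jq --raw-output '.access_token'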

Using the offline token, an API token is obtained as follows:

# Get an access token using the OFFLINE_TOKEN
# The result from curl contains the body and the http_code e.g. {body here}200
print_message "Getting an API token"
result=$(curl \
  --silent \
  --write-out "%{http_code}\n" \
  --header "Accept: application/json" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "grant_type=refresh_token" \
  --data-urlencode "client_id=cloud-services" \
  --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
  "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token"
)
return_code="${result:${#result}-3}"
if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi
if [[ $return_code -ne "200" ]]; then print_message "Failed: Curl command failed with code $return_code"; exit 1; fi
api_token=$(echo $body | jq --raw-output '.access_token')
print_message "Success: Got an API token"

The next stage is to define a cluster in a temporary JSON file and then use this file in a `clusters` API POST call. The ID of the new cluster is obtained from the response body using jq:

# Define a SNO cluster deployment with Virtualization and LVM
print_message "Defining a SNO cluster deployment with Virtualization and LVM"
cat << EOF > ./sno-deployment.json
{
  "name": "$my_cluster_name",
  "base_dns_domain": "$my_base_domain",
  "openshift_version": "$target_version",
  "cpu_architecture": "x86_64",
  "high_availability_mode": "None",
  "olm_operators": [
    {"name": "cnv"},
    {"name": "lvm"}
  ],
  "ssh_public_key": $cluster_sshkey,
  "pull_secret": $pull_secret
}
EOF

result=$(curl \
  --silent \
  --write-out "%{http_code}\n" \
  --data @./sno-deployment.json \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $api_token" \
  --request POST \
  "$assisted_service_api_base/v2/clusters"
)
return_code="${result:${#result}-3}"
if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi
if [[ $return_code -ne "201" ]]; then print_message "Failed: Curl command failed with code $return_code. Body $body"; exit 1; fi
cluster_id=$(echo $body | jq --raw-output '.id')
print_message "Success: Defined a SNO cluster deployment with Virtualization and LVM with cluster ID $cluster_id"
rm -f ./sno-deployment.json
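If you want to verify the registration by hand (this check is not part of the scripts), the Assisted Installer API also lets you read a single cluster back with a GET call using the returned cluster ID:

# Optional manual check: inspect the newly registered cluster
curl \
  --silent \
  --header "Authorization: Bearer $api_token" \
  "$assisted_service_api_base/v2/clusters/$cluster_id" \
  | jq '{name, status, status_info}'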

Once the cluster has been defined, it is registered as an infrastructure environment using the cluster ID captured previously:

# Register the SNO cluster deployment with Virtualization and LVM as an infrastructure environment
print_message "Registering the SNO cluster deployment with Virtualization and LVM as an infrastructure environment"
cat << EOF > ./infra-envs.json
{
  "name": "$my_cluster_name",
  "pull_secret": $pull_secret,
  "ssh_authorized_key": $cluster_sshkey,
  "image_type": "full-iso",
  "cluster_id": "${cluster_id}",
  "openshift_version": "$cluster_version"
}
EOF

result=$(curl \
  --silent \
  --write-out "%{http_code}\n" \
  --data @./infra-envs.json \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $api_token" \
  --request POST \
  "$assisted_service_api_base/v2/infra-envs"
)
return_code="${result:${#result}-3}"
if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi
if [[ $return_code -ne "201" ]]; then print_message "Failed: Curl command failed with code $return_code. Body $body"; exit 1; fi
infra_env_id=$(echo $body | jq --raw-output '.id')
print_message "Success: Registered the SNO cluster deployment with Virtualization and LVM with infrastructure environment ID $infra_env_id"
rm -f ./infra-envs.json

Now that the cluster is registered as an infrastructure environment, the iPXE script can be downloaded and saved as a file:

# Download the iPXE script as a file
print_message "Downloading the iPXE script as a file"
curl \
  --silent \
  --header "Authorization: Bearer $api_token" \
  "$assisted_service_api_base/v2/infra-envs/$infra_env_id/downloads/files?file_name=ipxe-script" > $sno_userdata_file

if [ ! -f ./$sno_userdata_file ]; then print_message "Failed: File $sno_userdata_file not downloaded to the current working directory"; exit 1; fi
print_message "Success: Downloaded the iPXE script to $sno_userdata_file"


The iPXE script file needs to be modified to prepend the following IBM Cloud VPC-specific lines:

:retry_dhcp
dhcp || goto retry_dhcp
sleep 2
ntp time.adn.networklayer.com


# Modify the iPXE script to include the dhcp retries and the ntp server
print_message "Modifying the iPXE script to include the dhcp retries and the ntp server"
sed -i.bak '2i\
:retry_dhcp\
dhcp || goto retry_dhcp\
sleep 2\
ntp time.adn.networklayer.com
' $sno_userdata_file
print_message "Success: Modified the iPXE script $sno_userdata_file for use with IBM Cloud bare metal servers for VPC"

The script now ends and returns to the CreateSNO.sh script for provisioning of the IBM Cloud VPC resources.

InstallSNOCluster.sh script

Once the IBM Cloud VPC resources have been provisioned, the InstallSNOCluster.sh script is started. The script gets a new API token; see the previous section for how this is done. A function named `waiting` is defined, which provides a rudimentary progress indicator:

#!/bin/bash

# This script should not be run directly, but be called from CreateSNO.sh

# DO NOT CHANGE ANYTHING BELOW

# Function to print a waiting..... progress. Pass the number of seconds to wait as the first argument and the message as the second
waiting () {
  sleep_time=$1
  waiting_msg=$2
  sleep_loop=$sleep_time
  progress=""
  dots=""
  while [[ $sleep_loop -ne 0 ]]
  do
    printf "%150s\\r"
    progress="($sleep_loop/$sleep_time)"
    printf "$waiting_msg$dots$progress\\r"
    dots="$dots."
    sleep 1
    sleep_loop=$((sleep_loop-1))
  done
  printf "%150s\\r"
}
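# Example usage, as called later in this script: count down for 30 seconds
# while redrawing a single status line.
#   waiting 30 "Waiting for cluster install"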

print_message "Running InstallSNOCluster.sh"

assisted_service_api_base="https://api.openshift.com/api/assisted-install"

# Check to see that there is a variable called $OFFLINE_TOKEN and it is populated

if [ -z "$OFFLINE_TOKEN" ]

then

echo $OFFLINE_TOKEN

echo "You need to define your Openshift offline token with OFFLINE_TOKEN=\"<PASTE_API_TOKEN_HERE>\" before running this script"

exit 1

fi

print_message "OFFLINE_TOKEN is defined"

# Get an access token using the OFFLINE_TOKEN

# The result from curl contains the body and the http_code e.g. {body here}200

print_message "Getting an API token"

result=$(curl \

--silent \

--write-out "%{http_code}\n" \

--header "Accept: application/json" \

--header "Content-Type: application/x-www-form-urlencoded" \

--data-urlencode "grant_type=refresh_token" \

--data-urlencode "client_id=cloud-services" \

--data-urlencode "refresh_token=${OFFLINE_TOKEN}" \

"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token"

)

return_code="${result:${#result}-3}"

if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi

if [[ $return_code -ne "200" ]]; then print_message "Failed: Curl command failed with code $return_code"; exit 1; fi

api_token=$(echo $body | jq --raw-output '.access_token')

print_message "Success: Got an API token"

Once an API token has been received, the cluster ID is retrieved by filtering on the cluster name. The cluster ID is then used in the cluster install API call:

# Get the ID of the cluster
result=$(curl \
  --silent \
  --write-out "%{http_code}\n" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $api_token" \
  --request GET \
  "$assisted_service_api_base/v2/clusters"
)
return_code="${result:${#result}-3}"
if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi
if [[ $return_code -ne "200" ]]; then print_message "Failed: Curl command failed with code $return_code"; exit 1; fi
cluster_id=$(echo $body | jq -r '.[] | select(.name == "'$my_cluster_name'" and .status == "ready") | .id')
print_message "Success: Cluster ID: $cluster_id"

# Install the cluster
result=$(curl \
  --silent \
  --write-out "%{http_code}\n" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $api_token" \
  --request POST \
  "$assisted_service_api_base/v2/clusters/$cluster_id/actions/install"
)
return_code="${result:${#result}-3}"
if [ ${#result} -eq 3 ]; then body=""; else body="${result:0:${#result}-3}"; fi
if [[ $return_code -ne "202" ]]; then print_message "Failed: Curl command $assisted_service_api_base/v2/clusters/$cluster_id/actions/install failed with code $return_code. Response: $body"; return; fi
print_message "Success: Cluster install started"

The script now uses the `waiting` function, defined earlier, to repeatedly call the cluster hosts API and extract the host status with jq:

# Check the install reaches done
host_not_done=1
sleep_time_seconds=30
while [ $host_not_done -ne 0 ] ; do
  waiting $sleep_time_seconds "Waiting for cluster install"
  if [[ $host_status != "Done" ]]; then
    host_status=$(curl \
      --silent \
      --header "Content-Type: application/json" \
      --header "Authorization: Bearer $api_token" \
      --request GET \
      "$assisted_service_api_base/v2/clusters/$cluster_id/hosts" \
      | jq -r '.[] | .status_info')
  fi
  host_not_done=0
  if [[ $host_status = "Done" ]] ; then
    printf "Host status $host_status\\r"
  else
    printf "Host status $host_status\\r"
    ((host_not_done++))
  fi
  sleep 2
  printf "%150s\\r"
  sleep 1
done

Once the host status is `Done`, the script ends, returning to the CreateSNO.sh script for the final task of printing details out to the user.
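As an optional extra check, not part of the scripts, the overall cluster status can also be queried via the API once the host reports Done; it should eventually report installed:

# Optional manual check: query the overall cluster status
curl \
  --silent \
  --header "Authorization: Bearer $api_token" \
  "$assisted_service_api_base/v2/clusters/$cluster_id" \
  | jq -r '.status'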

Previously

In the previous articles we looked at:

  • Explore OpenShift Virtualization with a Single Node OpenShift Cluster on IBM Cloud Virtual Private Cloud
  • Using a Script to deploy OpenShift Virtualization with a Single Node OpenShift Cluster on IBM Cloud Virtual Private Cloud

Coming up

In the next articles we will be looking at:

  • Deploying a consolidated cluster across three availability zones.
  • Virtual machine networking.