Hi
To demonstrate the modern method of running Ansible playbooks with ansible-navigator for IBM Power Systems in PowerVC, I will use a custom collection and a custom Ansible Execution Environment image to run a site.yml that creates VMs.
The plays contain jsonquery structures that can extract more detailed information on a single VM by parsing the JSON returned from the PowerVC OpenStack cloud. In this blog I get the managed host details of the IBM Power system to pass to the ibm.power_hmc modules for shutdown and startup of the lpar.
However, there is one issue with the format of the managed system name in the returned JSON:
os_server_host: "{{ servers | community.general.json_query(jmespath_host) | replace('[','') | replace(']','') | replace('9119MME_','Server-9119-MME-SN') }}"
The managed system name held in OpenStack is not the same as the default managed system name given in the HMC. The dash is changed to an underscore, so I had to change it back to find the lpar.
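For example, the replace filter turns the PowerVC form of the name back into the HMC form (the serial number below is made up for illustration):
- name: Show the managed system name translation
  debug:
    msg: "{{ '9119MME_21BEEF1' | replace('9119MME_', 'Server-9119-MME-SN') }}"
This prints Server-9119-MME-SN21BEEF1, which matches the default managed system naming on the HMC.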
The process uses DHCP allocation of IP addresses from the PowerVC pool. Once an address is allocated, the MAC address can be extracted from OpenStack to subsequently boot a CoreOS node using a DHCP server on a bastion node, for example.
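As a sketch of that extraction (assuming the network is named VLAN-130, as in the sample output later in this blog), the MAC address can be pulled from the same servers dictionary that the role tasks shown later use:
- name: get the mac address of the server from openstack
  set_fact:
    os_server_mac: "{{ servers | community.general.json_query(jmespath_mac) }}"
  vars:
    jmespath_mac: "servers[?name == '{{ server_name }}'].addresses.\"VLAN-130\"[0].\"OS-EXT-IPS-MAC:mac_addr\" | [0]"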
This blog starts with creating a single RHEL VM and then creating three RHEL VMs in a loop, all run from a site.yml that accepts environment variables from group_vars/all/infra.yml and is executed using a custom execution environment image and ansible-navigator.
The process demonstrates complete automation from one ansible play, managed by environment variable inputs in group_vars.
We do not want departments to maintain a workstation of many conflicting collections for many different requirements. So we create an Ansible Execution Environment Image that includes a custom collection and the required collection pre-requisites, which can run anywhere as a container.
Provisioning an automated solution in Ansible as a custom execution environment image locks in the method as an immutable process from dev to prod.
Playbooks used to create the collection can be found here:
ibellinfantie/modern: example playbooks for Modernization on IBM Power
Below is a play I use in this blog for creating a VM.
- name: Create a new VM in PowerVC
  os_server:
    state: present
    auth:
      auth_url: '{{ os_auth_url }}'
      username: '{{ os_username }}'
      password: '{{ os_password }}'
      project_name: '{{ os_project_name }}'
      user_domain_name: '{{ os_user_domain_name }}'
      project_domain_name: '{{ os_project_domain_name }}'
    timeout: 900
    validate_certs: no
    name: '{{ vm_name }}'
    image: '{{ powervc_rhel_image }}'
    flavor: '{{ worker_flavor }}'
    nics:
      - net-id: '{{ powervc_net_id }}'
    userdata: |
      {%- raw -%}#!/bin/bash
      service sshd restart
      {% endraw %}
  register: vmout

- debug: var=vmout
  tags: [ never, debug ]
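Note that the debug task is tagged never, so it is skipped by default; add --tags debug to the ansible-navigator run command when you want to see the full vmout structure.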
There is also a play for stopping and starting lpars on IBM Power using the HMC Ansible collection.
Here we pass a server name as a variable, server_name. Its details can be searched after creation in the OpenStack PowerVC cloud namespace. The OpenStack details for the namespace are in a JSON dictionary object named "servers".
[ansi01@controller modern]$ cat roles/ocp_bootstrap/tasks/hmc_startup_nodes.yml
---
- name: get the lpar details from openstack
  set_fact:
    os_server_name: "{{ servers | community.general.json_query(jmespath_name) | replace('[','') | replace(']','') }}"
    os_server_ip: "{{ servers | community.general.json_query(jmespath_ip) | replace('[','') | replace(']','') }}"
    os_server_host: "{{ servers | community.general.json_query(jmespath_host) | replace('[','') | replace(']','') | replace('9119MME_','Server-9119-MME-SN') }}"
  vars:
    jmespath_name: "servers[?name == '{{ server_name }}'].instance_name | [0]"
    jmespath_ip: "servers[?name == '{{ server_name }}'].access_ipv4 | [0]"
    jmespath_host: "servers[?name == '{{ server_name }}'].compute_host | [0]"
  register: lpar_details

- name: startup the lpar
  include_tasks: "stop_start_lpar.yml"
  vars:
    stop_start: 'poweron'
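The stop_start_lpar.yml tasks file is not shown here, but a minimal sketch of what it could contain using the ibm.power_hmc collection follows (the exact module wiring is my assumption, not the collection's actual file):
- name: "{{ stop_start }} the lpar on its managed system"
  ibm.power_hmc.powervm_lpar_instance:
    hmc_host: "{{ hmc_hostname }}"
    hmc_auth:
      username: "{{ hmc_username }}"
      password: "{{ hmc_password }}"
    system_name: "{{ os_server_host }}"
    vm_name: "{{ os_server_name }}"
    action: "{{ 'poweron' if stop_start == 'poweron' else 'shutdown' }}"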
The servers dictionary is created in the below play, and is searchable.
- name: Retrieve list of all servers in this project
  os_server_info:
    auth:
      auth_url: '{{ os_auth_url }}'
      username: '{{ os_username }}'
      password: '{{ os_password }}'
      project_name: '{{ os_project_name }}'
      user_domain_name: '{{ os_user_domain_name }}'
      project_domain_name: "{{ os_project_domain_name }}"
    validate_certs: false
  register: servers
The inputs to the role are passed from Ansible as variables defined in a group_vars directory, or any other preferred input method.
Create the custom Execution Environment Image files
Log in to your Ansible Automation Hub registry with podman, so the build can pull the base images from it.
Create a directory to work in, e.g. mkdir ee-os-sdk
Create the execution-environment.yml for ansible-builder.
---
version: 1
build_arg_defaults:
  EE_BASE_IMAGE: 'ansiblehub.xx.xx.xx/ee-minimal-rhel8:latest'
  EE_BUILDER_IMAGE: 'ansiblehub.xx.xx.xx/ansible-builder-rhel8:latest'
ansible_config: ansible.cfg
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt
Create the requirements.yml for the collection dependencies.
Note that we have added the custom modern.powervc_ocp collection, and the existing ibm.power_hmc collection.
# cat requirements.yml
---
collections:
  - name: openstack.cloud
  - name: modern.powervc_ocp    # custom collection
  - name: ansible.posix
  - name: ansible.utils
  - name: ansible.netcommon
  - name: community.general
  - name: ibm.power_hmc
Create the requirements.txt for Python.
# cat requirements.txt
openstackclient
openstacksdk
Create the bindep.txt for the operating system packages.
# cat bindep.txt
libxml2-devel
libxslt-devel
python3-devel
gcc
python3-lxml
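Now run ansible-builder to generate the build context; by default it writes a context/ directory containing the Containerfile that the next steps customize.
[ansi01@controller ee-os-sdk]$ ansible-builder create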
See this support note for information on how to copy the Red Hat subscription details into the container image.
https://access.redhat.com/solutions/1443553
Ensure the base image is the minimal one, to keep the image smaller and to ensure that microdnf works as expected.
ARG EE_BASE_IMAGE="ansiblehub.xx.xx.xx.xx/ee-minimal-rhel8:latest"
If your Ansible Hub uses a self-signed certificate, add --ignore-certs to the ansible-galaxy commands in the generated Containerfile, as follows.
RUN ansible-galaxy role install --ignore-certs $ANSIBLE_GALAXY_CLI_ROLE_OPTS -r requirements.yml --roles-path "/usr/share/ansible/roles"
RUN ANSIBLE_GALAXY_DISABLE_GPG_VERIFY=1 ansible-galaxy collection install --ignore-certs $ANSIBLE_GALAXY_CLI_COLLECTION_OPTS -r requirements.yml --collections-path "/usr/share/ansible/collections"
Build the Ansible Execution Environment Image (EEI).
Now use podman to build the EEI. I like to use the following command format, which makes it quick to change to a push when done.
[ansi01@controller ee-os-sdk]$ podman build -f context/Containerfile -t ansiblehub.xx.xx.xx.xx/ee-modern-openstacksdk:19 context
When done, login to your Ansible Automation Hub with podman.
[ansi01@controller ee-os-sdk]$ podman login ansiblehub.xx.xx.xx.xx
Username: admin
Password:
Login Succeeded!
Push the created EEI to your Ansible hub.
[ansi01@controller ee-os-sdk]$ podman push ansiblehub.xx.xx.xx.xx/ee-modern-openstacksdk:19
We can use this EEI to run any playbooks that use Ansible OpenStack and IBM HMC modules, as well as the custom roles in the collection for organization-targeted tasks.
Populate Environment Variables
The EEI has all the collections we need to run a play. All we need are the playbooks we created earlier and an ansible.cfg.
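As a minimal illustration, the ansible.cfg can be as simple as the following (these contents are my assumption; adjust for your environment):
[defaults]
inventory = ./inventory
host_key_checking = False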
The plays include a disk attachment named by the environment variable "new_vol".
Create a disk in PowerVC with any name and add that name to group_vars/all/infra.yml.
We shall use the name modern here.
Update the play variables in group_vars/all/infra.yml.
The site.yml creates one VM named vm01 plus three worker nodes, and uses the RHEL image from PowerVC.
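For reference, a minimal site.yml along these lines could look like the sketch below. The play name and role name are taken from the run output later in this blog; the rest is an assumption:
---
- name: Deploy some VMs
  hosts: localhost
  gather_facts: false
  roles:
    - role: modern.powervc_ocp.powervcvm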
group_vars/all/infra.yml
---
# HMC access
hmc_username: 'xxxxx'
hmc_password: 'xxxxxx'
hmc_hostname: 'xxxxxxxx'
# VMs to Create
vm01_hostname: "vm01"
workers_list:
  - worker_name: "worker-0"
  - worker_name: "worker-1"
  - worker_name: "worker-2"
# name of the disk created in PowerVC to be attached to vm01
new_vol: "modern"
The other environment file that needs to be populated is in group_vars/all/os_powervc.yml
Enter the Cloud information for the PowerVC project you will be using here.
Also, copy the PowerVC certificate /etc/pki/tls/certs/powervc.crt to the local workstation.
Populate the UUIDs of the PowerVC images, networks and flavors.
You will have to use the openstack command to get the UUID of the network:
$ openstack network list
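Similarly, the image and flavor UUIDs can be listed with:
$ openstack image list
$ openstack flavor list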
group_vars/all/os_powervc.yml
Because we are using Ansible variables to provide the cloud details, we do not need a clouds.yaml.
---
os_identity_api_version: 3
os_auth_url: "https://xx.xx.xx.xx:5000/v3"
os_cert: "/etc/pki/tls/certs/powervc.crt"
os_region_name: "RegionOne"
os_project_domain_name: "Default"
os_project_name: 'modern'
os_tenant_name: "{{ os_project_name }}"
os_user_domain_name: "Default"
os_username: 'xxxxx'
os_password: 'xxxxx'
os_compute_api_version: 2.46
os_network_api_version: 3
os_image_api_version: 2
os_volume_api_version: 3
powervc_rhel_image: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
powervc_coreos_image: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
worker_flavor: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
master_flavor: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
auto_ip: 'yes'
powervc_net_id: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
Use the EEI to run an Ansible Play
Run the site.yml previously created to build vm01 and the worker nodes.
[ansi01@controller modern]$ ansible-navigator --eei ansiblehub.xx.xx.xx.xx/ee-modern-openstacksdk:19 run site.yml -m stdout
The VM is created in PowerVC.
Normally, PowerVC allows you to create more than one VM with the same name and adds an OpenStack UUID on the end, which is not what you want if you need a standard method for creating unique names that match the hostname.
If you run this play a second time, the VMs are not created twice, because the plays check whether they already exist in OpenStack before creating.
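One way such a check can look (a sketch, not necessarily the exact implementation in the collection) is to gate the creation tasks on the server list returned by os_server_info; the create_vm.yml tasks file here is hypothetical:
- name: check whether the VM already exists in openstack
  set_fact:
    vm_exists: "{{ vm_name in (servers.servers | map(attribute='name') | list) }}"
- name: create the VM only when it is not already there
  include_tasks: create_vm.yml    # hypothetical tasks file containing the os_server play shown earlier
  when: not (vm_exists | bool)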
One command to run them all.
[ansi01@controller modern]$ ansible-navigator --eei ansiblehub.xx.xx.xx.xx/ee-modern-openstacksdk:19 run site.yml -m stdout
PLAY [Deploy some VMs] **********************************************************************************************************************************************
TASK [modern.powervc_ocp.powervcvm : Create a new VM in PowerVC] ****************************************************************************************************
changed: [localhost]
TASK [modern.powervc_ocp.powervcvm : Retrieve list of all servers in this project] **********************************************************************************
ok: [localhost]
TASK [modern.powervc_ocp.powervcvm : debug] *************************************************************************************************************************
ok: [localhost] => {
    "msg": "vm name is vm01"
}
TASK [modern.powervc_ocp.powervcvm : Get the created server] ********************************************************************************************************
ok: [localhost] => {
    "msg": [
        {
            "access_ipv4": "xx.xx.xx.xx",
            "id": "29c2bfd0-f36c-4f3d-9785-bbb93e16339a",
            "name": "vm01",
            "status": "ACTIVE"
        }
    ]
}
TASK [modern.powervc_ocp.powervcvm : get the server name and IP Fact] ***********************************************************************************************
ok: [localhost]
TASK [modern.powervc_ocp.powervcvm : update eei /etc/hosts for xx.xx.xx.xx vm01] *********************************************************************************
changed: [localhost]
TASK [modern.powervc_ocp.powervcvm : Pause for 2 minutes to allow the interface to be up] ***************************************************************************
Pausing for 120 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
There is a lot of information about the VM that can be accessed in JSON format from the PowerVC cloud.
You can use this information to automate other tasks in your environment.
Below is an example of output retrieving the MAC address.
TASK [modern.powervc_ocp.ocp_nodes_create : debug] ******************************************************************************************************************
ok: [localhost] => {
    "vmout": {
        "changed": true,
        "failed": false,
        "server": {
            "access_ipv4": "",
            "access_ipv6": "",
            "addresses": {
                "VLAN-130": [
                    {
                        "OS-EXT-IPS-MAC:mac_addr": "fa:26:12:23:65:20",
                        "OS-EXT-IPS:type": "fixed",
                        "addr": "xx.xx.xx.xx",
                        "version": 4
                    }
                ]
            },
            "admin_password": null,
            "attached_volumes": [
                {
                    "attachment_id": null,
                    "bdm_id": null,
                    "delete_on_termination": true,
                    "device": null,
                    "id": "6fa4131f-8fa0-47db-9653-4353873aa312",
                    "location": null,
                    "name": null,
                    "tag": null,
                    "volume_id": null
                }
In my next blog I will demonstrate how I use a custom EEI and collection to create a User Provisioned Infrastructure (UPI) OpenShift Cluster.
The EEI is an immutable image that can be run more than once to create multiple OCP UPI clusters on the same group of IBM Power managed systems managed by PowerVC. The user just updates the OCP values in group_vars/… to define a new cluster domain and PowerVC namespace.
Thanks for reading and please share your comments.
Ian
------------------------------
Ian Bellinfantie
------------------------------