zPET - IBM Z and z/OS Platform Evaluation and Test - Group home

Automated Provisioning of z/OS in the IBM Cloud



z/OS Platform Evaluation and Test (zPET) runs customer-like workloads in a Parallel Sysplex environment to perform the final verification of new IBM Z hardware and software. For more information about zPET, please check out our community.



IBM has announced IBM Wazi as-a-Service which brings z/OS to the IBM Cloud. This offers a lot of new, exciting capabilities around development and test for z/OS. In addition to exploring the capabilities of z/OS in the IBM Cloud to augment our existing environment and processes, our team also contributed to the larger test effort of testing z/OS in the IBM Cloud. In this article, we will share how we built new automation to do this.



Before we could start using the IBM Wazi as-a-Service z/OS Virtual Server Instance (VSI), we needed to provision one… and before we could do that, we needed to configure resources in the IBM Cloud. At the start, this was a manual process for us. We would create a VPC, SSH keys, a subnet and floating IP, security groups, and some storage volumes, and with those in place we would provision a new VSI from the pre-installed z/OS stock image provided by IBM Wazi as-a-Service. After the VSI was fully provisioned and accessible, we would log in with SSH and TN3270 to perform setup and configuration tasks, such as copying over testcases and test resources. With that complete, we would finally be ready to start running our testcases and performing our test scenarios.

This process worked at the start, but as new versions of the z/OS stock image were made available to us, we kept needing to repeat the whole process. Sometimes testers could share a VSI for their work, while other times each tester would need an isolated and dedicated environment, increasing the number of times we’d manually go through the provisioning and configuration process. It quickly became apparent that we needed an automated, repeatable process that could get a z/OS VSI into a "test-ready" state for our testers. Since we wanted a reliable and proven automation engine that could handle all parts of this process, we turned to Ansible. Ansible has a very active community, and this becomes clear when you see all the plugins – called Ansible Collections – that are available. We looked to leverage existing collections provided by IBM and the open source community to automate processes in the Cloud, on z/OS, and between our test tools and repositories.

In this blog, we will talk about how we automated the configuration of IBM Cloud resources and the provisioning of a z/OS VSI. For details on how we automated the configuration of z/OS, please check back shortly for our upcoming blog.

Please note: The snippets and examples showcased here are slightly simplified for readability and consumability. To view our production playbooks, please once again check back shortly for a link to our GitHub repository.


Constructing a Configuration File

Since we planned on creating multiple VSIs across multiple regions, we needed to abstract these details away from the process itself. Therefore, we built configuration files with values that can easily be supplied to our Ansible playbook. Here is an example of what one configuration file might look like:

name_prefix: zpet
resource_group: 123456789abcdefghijklmnopqrstuvw
zone: us-south-3
users:
  - name: user_a
    email: user_a@ibm.com
    ssh_public_key: 'ssh-rsa ...'
    vpn_vsi_access: True
    zos_userids:
      - user_a1
      - user_a2
  - name: user_b
    email: user_b@ibm.com
    ssh_public_key: 'ssh-rsa ...'
    vpn_vsi_access: True
    zos_userids:
      - user_b1
      - user_b2
vpn_vsi_image: ibm-ubuntu-20-04-minimal-amd64-2
vpn_vsi_profile: bx2-2x8
zos_vsi_image: ibm-zos-2-4-s390x-dev-test-wazi
zos_vsi_profile: bz2-4x16


Some information about these fields:

  • name_prefix is a string that we prepend to the names of the resources we create. This lets us group and identify these resources easily.
  • resource_group is the ID of the resource group we are allowed to use.
  • zone is the IBM Cloud Multi-Zone Region to operate against.
  • users is a list of objects that define information about our users. For each, we supply a name, email, SSH public key, whether or not they need access to the Linux VPN VSI, and the list of z/OS user IDs they would need (we will get to this in the Automated Configuring of z/OS in the IBM Cloud blog).
  • vpn_vsi_image is the name of the image used to provision the Linux instance used for VPN tunneling.
  • vpn_vsi_profile is the name of the profile used to provision the Linux instance used for VPN tunneling.
  • zos_vsi_image is the name of the image used to provision the z/OS instance.
  • zos_vsi_profile is the name of the profile used to provision the z/OS instance.

There are multiple ways of connecting to your z/OS VSIs once they are created in the IBM Cloud. The first option, which we do not recommend, is to create a public IP address and attach it to your z/OS VSI. The second option is to use the built-in VPN resource. The third option, which is what we will be using in this example, is to create a Linux VSI, install VPN software on it, and then use it to tunnel through to your z/OS VSI. It is for this reason that our configuration file has vpn_vsi_image and vpn_vsi_profile fields.

With the configuration file squared away, we can begin creating our Ansible playbooks.
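Each playbook can then load the configuration file at the top of its play so that every task can reference those values. Here is a minimal sketch of what that might look like; the file name config.yml and the play layout are our assumptions for illustration, not a fixed convention:

```yaml
# Hypothetical play skeleton showing how the configuration file's
# values are made available to every task in the play.
- hosts: localhost
  gather_facts: no
  vars_files:
    - config.yml          # the configuration file shown above
  tasks:
    - name: Show which zone we are operating against
      ansible.builtin.debug:
        msg: "Creating resources with prefix {{ name_prefix }} in {{ zone }}"
```

Loading the file with vars_files keeps the playbooks themselves free of environment-specific values, so the same plays can target different regions just by swapping configuration files.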


Creating IBM Cloud Resources

Working with resources in the IBM Cloud is easy using the Ansible modules provided in the ibm.cloudcollection collection. After setting up a configuration file with some pre-determined values, we were able to create plays and tasks to begin creating our IBM Cloud resources. The first step was creating a VPC:

  - name: Configure VPC
    ibm.cloudcollection.ibm_is_vpc:
      name: "{{ name_prefix }}-vpc"
      state: available
      resource_group: "{{ resource_group }}"
    register: vpc_create_output

  - name: Save VPC as fact
    ansible.builtin.set_fact:
      cacheable: True
      vpc: "{{ vpc_create_output.resource }}"


The ibm_is_vpc module allows us to create a VPC using the information we supplied in our configuration file. Once the VPC is created, we save its details as a fact called vpc, which we can then use when creating our subnet:

  - name: Configure VPC Subnet
    ibm.cloudcollection.ibm_is_subnet:
      name: "{{ name_prefix }}-subnet"
      state: available
      vpc: "{{ vpc.id }}"
      total_ipv4_address_count: 256
      zone: "{{ zone }}"
      resource_group: "{{ resource_group }}"
    register: subnet_create_output

  - name: Save VPC Subnet as fact
    ansible.builtin.set_fact:
      cacheable: True
      subnet: "{{ subnet_create_output.resource }}"


After creating the subnet and saving the details as a fact called subnet, we can move on to creating SSH keys. To do this, we start by creating an empty list object. We then loop over the users list that we defined in our configuration file, and for each user, we extract the name and public key. We provide these values to the ibm_is_ssh_key module to create an SSH Key Cloud resource:

  - name: Create SSH Key ID list
    ansible.builtin.set_fact:
      ssh_key_ids: []

  - name: Configure user SSH Keys
    ibm.cloudcollection.ibm_is_ssh_key:
      name: "{{ item.name }}-ssh-key"
      public_key: "{{ item.ssh_public_key }}"
      resource_group: "{{ resource_group }}"
    loop: "{{ users }}"
    register: ssh_key_create_output


With the SSH keys created in the IBM Cloud, we can loop over the response and build a new list of keys, filtering out the users that do not need access to the Linux VSI server where the VPN software will run.

  - name: Add user SSH keys to SSH Key ID list
    ansible.builtin.set_fact:
      ssh_key_ids: "{{ ssh_key_ids + [ item.resource.id ] }}"
    loop: "{{ ssh_key_create_output.results }}"
    when: item.item.vpn_vsi_access


Since our z/OS VSI won’t be on a public network, we will create a Linux VSI and install VPN software. This will allow us to connect to a VPN in order to access our z/OS VSI. Since we have just identified the SSH keys of the users that need access to this Linux VSI, we can use that variable along with some values defined in our configuration file to create the Linux VSI.

The first step is to determine the ID of the image to use based on the provided name. We accomplish this with the ibm_is_images_info module. We pull all images and find the ID of the image whose name matches the one we need.

  - name: Retrieve image list
    ibm.cloudcollection.ibm_is_images_info:
    register: images_list

  - name: Set image name/id dictionary fact
    ansible.builtin.set_fact:
      cacheable: True
      image_dict: "{{ images_list.resource.images |
                        items2dict(key_name='name', value_name='id') }}"


After getting the image ID, we can then provision the Linux VSI. We save the details of the newly created VSI as a fact called linux_vsi, which is used in the next step.

  - name: Create VPN VSI
    ibm.cloudcollection.ibm_is_instance:
      name: "{{ name_prefix }}-vpn-vsi"
      state: available
      vpc: "{{ vpc.id }}"
      profile: "{{ vpn_vsi_profile }}"
      image: "{{ image_dict[vpn_vsi_image] }}"
      keys: "{{ ssh_key_ids }}"
      primary_network_interface:
        - subnet: "{{ subnet.id }}"
      zone: "{{ zone }}"
      resource_group: "{{ resource_group }}"
    register: vpn_vsi_create_output

  - name: Save VSI as fact
    ansible.builtin.set_fact:
      cacheable: True
      linux_vsi: "{{ vpn_vsi_create_output.resource }}"


Once the Linux VSI is created, we need to attach a floating IP address to it. We can create this floating IP with the ibm_is_floating_ip module, saving the IP address as a fact called fip.

  - name: Configure Floating IP Address
    ibm.cloudcollection.ibm_is_floating_ip:
      name: "{{ name_prefix }}-fip"
      state: available
      target: "{{ linux_vsi.primary_network_interface[0]['id'] }}"
      resource_group: "{{ resource_group }}"
    register: fip_create_output

  - name: Save Floating IP as fact
    ansible.builtin.set_fact:
      cacheable: True
      fip: "{{ fip_create_output.resource }}"


Finally, we need to configure security group rules to limit access to the VSI. For this, we use the ibm_is_security_group_rule module, allowing inbound access only on ports 22 (SSH) and 1194 (the standard OpenVPN port).

  - name: Configure Security Group Rule to open SSH on the VSI
    ibm.cloudcollection.ibm_is_security_group_rule:
      state: available
      group: "{{ vpc.default_security_group }}"
      direction: inbound
      tcp:
        - port_max: 22
          port_min: 22

  - name: Configure Security Group Rule to open VPN port on the VSI
    ibm.cloudcollection.ibm_is_security_group_rule:
      state: available
      group: "{{ vpc.default_security_group }}"
      direction: inbound
      udp:
        - port_max: 1194
          port_min: 1194


At this point, we have created all of the resources in the IBM Cloud, including a Linux VSI that will run VPN software to connect to our z/OS VSI. Since there are multiple VPN options and even more ways of configuring them, we will not cover that section in this blog. However, once that is complete, we can begin provisioning z/OS VSIs.


Provisioning a z/OS Virtual Server Instance

With our IBM Cloud resources created and configured, all that is left is to provision our z/OS VSI and wait for it to become reachable. We will once again make use of the ibm_is_instance module. This module is supplied with values from our configuration file as well as the resources previously created.

  - name: Provision z/OS VSI
    ibm.cloudcollection.ibm_is_instance:
      name: "{{ name_prefix }}-{{ zos_vsi_name }}"
      state: available
      boot_volume:
        - name: "{{ name_prefix }}-{{ zos_vsi_name }}-boot-volume"
      vpc: "{{ vpc.id }}"
      profile: "{{ zos_vsi_profile }}"
      image: "{{ image_dict[zos_vsi_image] }}"
      keys: "{{ ssh_key_ids }}"
      primary_network_interface:
        - subnet: "{{ subnet.id }}"
      zone: "{{ zone }}"
      resource_group: "{{ resource_group }}"
    register: zos_vsi_create_output

  - name: Save VSI as fact
    ansible.builtin.set_fact:
      cacheable: True
      zos_vsi: "{{ zos_vsi_create_output.resource }}"


Once the Virtual Server Instance is created in the IBM Cloud, we just need to wait for it to become accessible over SSH, after which we know the system is fully up and ready for use.

  - name: Wait for VSI to become reachable over SSH
    ansible.builtin.wait_for:
      host: "{{ zos_vsi.primary_network_interface[0].primary_ipv4_address }}"
      port: 22
      delay: 300
      sleep: 30
      timeout: 2700
    delegate_to: localhost


Deprovisioning the z/OS VSI

After we have used the z/OS VSI and no longer need it, we can deprovision it to free up resources. We drive this via a separate Ansible playbook, and the name of the VSI we are deprovisioning is supplied to this playbook via the --extra-vars argument. The first step of this process is to retrieve the VSI by name from the IBM Cloud, saving this resource as a fact called zos_vsi.
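As a sketch, a deprovisioning play driven by --extra-vars might start like this; the playbook name, VSI name, and the guard task are our assumptions for illustration:

```yaml
# Hypothetical invocation of the deprovisioning playbook:
#   ansible-playbook deprovision_zos_vsi.yml --extra-vars "vsi_name=zpet-zos-vsi-1"
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Fail early if no VSI name was supplied
      ansible.builtin.fail:
        msg: "Supply the VSI to deprovision with --extra-vars 'vsi_name=...'"
      when: vsi_name is not defined
```

Failing fast when vsi_name is missing avoids running the later lookup and deletion tasks against an undefined variable.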

  - name: Retrieve z/OS VSI
    ibm.cloudcollection.ibm_is_instance_info:
      name: "{{ vsi_name }}"
    register: vsi_info_output

  - name: Save z/OS VSI as fact
    ansible.builtin.set_fact:
      cacheable: True
      zos_vsi: "{{ vsi_info_output.resource }}"


In our case, in addition to deprovisioning the z/OS VSI, we also want to delete any volumes that we may have attached to this VSI. To do this, we use the ibm_is_volume module.

  - name: Destroy all volumes on z/OS VSI
    ibm.cloudcollection.ibm_is_volume:
      id: "{{ item }}"
      state: absent
    loop: "{{ zos_vsi.volumes }}"


To deprovision the VSI, we use the same module that we used to provision it: ibm_is_instance.

  - name: Remove z/OS VSI
    ibm.cloudcollection.ibm_is_instance:
      state: absent
      id: "{{ zos_vsi.id }}"
      image: "{{ zos_vsi.image }}"
      keys: []
      primary_network_interface:
        - subnet: "{{ zos_vsi.primary_network_interface[0].subnet }}"
      profile: "{{ zos_vsi.profile }}"
      vpc: "{{ zos_vsi.vpc }}"
      zone: "{{ zos_vsi.zone }}"



We have discussed the ways that we use Ansible automation to configure IBM Cloud resources, provision a z/OS VSI, and then deprovision that VSI within minutes. This automation has become critical to our test process: it lets us easily create a z/OS VSI for testing and just as easily deprovision the resources when we are done. We aim to use this solution for any testing requirements that may come in the future.



Wazi as a Service: https://www.ibm.com/cloud/wazi-as-a-service

Technical Documentation: https://www.ibm.com/docs/en/wazi-aas/1.0.0

Getting started with Wazi as a Service: https://developer.ibm.com/blogs/get-started-with-ibm-wazi-as-a-service/

Red Hat Ansible Certified Content for IBM Z Content Solutions: https://www.ibm.com/support/z-content-solutions/ansible/

IBM z/OS Ansible Collections: https://ibm.biz/BdfrAu



Michael Cohoon (mtcohoon@us.ibm.com)

Torin Reilly (treilly@us.ibm.com)