IBM z/OS Core collection v1.13.0 is out now! Providing the ability to resize zFS aggregates with a new module

By Oscar Fernando Flores Garcia posted Mon March 31, 2025 04:53 PM


Hi all! Almost two months ago I announced the release of IBM z/OS Core collection v1.13.0-beta.1, and now it's time to announce its general availability. Our new module zos_zfs_resize is now available, along with 7 enhancements and 14 bugfixes. During the beta period, 3 important bugfixes were ported from release v1.12.1 into v1.13.0; I will cover those in the bugfixes section.

New job submit capabilities

Two new capabilities come to zos_job_submit to make it more flexible when executing long-running jobs or many jobs at the same time. First, the deploy-and-forget feature: when you set wait_time_s to 0, the module will only submit the job and will not wait to pick up additional details such as job name, ret_code and so on.

    - name: Submit and forget a job.
      ibm.ibm_zos_core.zos_job_submit:
        src: "job.jcl"
        location: local
        wait_time_s: 0
      register: job_submit

This allows us to simply submit a job and check on it later, using either zos_job_query or zos_job_output depending on how much information about the job you need.
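
For example, here is a minimal sketch of checking on the job later with zos_job_query; it assumes the submit result exposes the job id under jobs[0].job_id, so verify that return structure against your collection version:

    - name: Check on the submitted job later.
      ibm.ibm_zos_core.zos_job_query:
        job_id: "{{ job_submit.jobs[0].job_id }}"
      register: job_query_result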

The next feature is support for the Ansible async keyword in zos_job_submit. By default, Ansible runs tasks sequentially, holding the connection to the remote node open until the action is completed, which can pose challenges when waiting for long-running tasks. To solve this, users can use the async and poll keywords together to control the asynchronous behavior of a task. You can read more about asynchronous actions in the Ansible docs.

This is an example of how to use async with zos_job_submit:

    - name: Submit multiple jobs asynchronously.
      ibm.ibm_zos_core.zos_job_submit:
        src: "{{ item }}.jcl"
        location: local
      loop:
        - job_1
        - job_2
        - job_3
      async: 45
      poll: 0
      register: job_submit

    - name: Query async tasks in a loop.
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: job_outputs
      until: job_outputs.finished
      retries: 10
      # Use loop if you don't care about registering the output from each job.
      # loop: "{{ job_submit.results }}"
      # Use with_items if you do care.
      with_items: "{{ job_submit.results }}"

What is happening in this piece of code? First, we set async to 45 and poll to 0. This means Ansible will start the task and immediately move on to the next one without polling for results; the job submission will continue to run in the background until it completes, fails or times out after 45 seconds.

Then we fetch the result of each job using async_status. This way, with just one submit task in our playbook, we can run 3 jobs concurrently using a loop. We have tested this with up to 100 jobs being executed concurrently, and we hope it takes your job automation to a whole new level!

Enhancements

  • Enable or disable autoescaping in template parameters: For the modules that allow Jinja templating, we've added a new option inside template_parameters that lets users disable autoescaping of common XML/HTML characters when working with Jinja templates in zos_job_submit, zos_copy and zos_script (see the first sketch after this list).
  • Async support for zos_job_submit: You can now use the async keyword in Ansible to issue a job submission and fetch the result of the task later in your playbook. Additionally, when wait_time_s is 0, the module will submit the job and will not wait to get the job details or content, returning only the job id.
  • Select a custom return code threshold in zos_mvs_raw: Previously, if a program returned a non-zero return code, the zos_mvs_raw module would fail. Now, users can tolerate return codes up to a custom max_rc depending on the program they are executing, giving them more freedom over how this module behaves (see the second sketch after this list).
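
To illustrate the autoescape option, here is a minimal sketch using zos_copy; the template file, destination data set and the template_parameters.autoescape suboption are assumptions to be checked against the module documentation for your version:

    - name: Copy a Jinja template without autoescaping XML/HTML characters.
      ibm.ibm_zos_core.zos_copy:
        src: "files/report_template.j2"   # hypothetical local template
        dest: "USER.REPORT.OUTPUT"        # hypothetical destination data set
        use_template: true
        template_parameters:
          autoescape: false               # keep characters like < > & as-is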

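And here is a sketch of max_rc in zos_mvs_raw, assuming a simple IDCAMS LISTCAT step; the data set name is hypothetical and the DD layout follows the module's documented dds options:

    - name: Run IDCAMS and tolerate return codes up to 4.
      ibm.ibm_zos_core.zos_mvs_raw:
        program_name: idcams
        auth: true
        max_rc: 4                         # do not fail unless the RC is above 4
        dds:
          - dd_input:
              dd_name: sysin
              content: " LISTCAT ENTRIES('USER.EXAMPLE.DATA')"
          - dd_output:
              dd_name: sysprint
              return_content:
                type: text
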
Bugfixes

  • zos_find: Module would not find a VSAM cluster resource type if it was in use with DISP=OLD, and would not find the DATA and INDEX resources. Fix now finds the VSAM cluster and finds DATA and INDEX resources.
  • zos_mvs_raw:
    • If a program failed with a non-zero return code and verbose was false, the module would succeed. Whereas, if the program failed and verbose was true the module would fail. Fix now has a consistent behavior and fails in both cases.
    • Module would not populate stderr return value. Fix now populates stderr in return values.
    • Module would obfuscate the return code from the program when failing, returning 8 instead. Fix now returns the proper return code from the program.
    • Module would return the stderr content in stdout when verbose was true and return code was 0. Fix now does not replace stdout content with stderr.
    • Option tmp_hlq was not being used as HLQ when creating backup data sets. Fix now uses tmp_hlq as HLQ for backup data sets.
  • zos_script: When the user trying to run a remote script had execute permissions but wasn't the owner of the file, the module would fail while trying to change permissions on it. Fix now ensures the module first checks whether the user can execute the script and only tries to change permissions when necessary.

  • zos_fetch: Some relative paths were not accepted as a parameter, e.g. files/fetched_file. Change now allows the user to use different types of relative paths as a parameter.

  • zos_copy:

    • Improve module zos_copy error handling when the user does not have universal access authority set to UACC(READ) for SAF Profile 'MVS.MCSOPER.ZOAU' and SAF Class OPERCMDS. The module now handles the exception and returns an informative message.

What are the three bugfixes added during the beta period? All of them are fixes in zos_copy, mainly related to the force_lock option and special characters. This is the list, followed by a short sketch that combines the options involved:

  • Previously, if the data set name included special characters such as $, validation would fail when force_lock was false. This has been changed to allow the use of special characters when the force_lock option is false.
  • Previously, if the data set name included special characters such as $ and the asa_text option was true, the module would fail. Fix now allows the use of special characters in the data set name when the asa_text option is true.
  • When asa_text was set to true at the same time as force_lock, a copy would fail saying the destination was already in use. Fix now opens destination data sets with disposition SHR when force_lock and asa_text are set to true.
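
A minimal sketch that combines these options; the source file and the destination data set name (including the $ character) are hypothetical:

    - name: Copy an ASA text file to an in-use data set whose name contains $.
      ibm.ibm_zos_core.zos_copy:
        src: "files/report.txt"     # hypothetical local ASA text file
        dest: "USER.$REPORT.ASA"    # hypothetical destination data set
        asa_text: true
        force_lock: true            # open the destination with DISP=SHR even if in use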

New module: zos_zfs_resize

With this new release, v1.13.0, we include a new module, zos_zfs_resize. As we mentioned in the previous blog for the v1.13.0-beta.1 release, it offers users a simple interface to update their zFS aggregate sizes without having to worry about the details. This is a task example:

    - name: "Grow ZFS aggregate and get trace back on data set {{ trace_back_data_set }}."
      ibm.ibm_zos_core.zos_zfs_resize:
        target: "IMSTEST.STORAGE.ZFS"
        size: 500
        space_type: "m"
        trace_destination: "{{ trace_back_data_set }}"
      register: grow_output

Notice that you don't tell the module whether to grow or shrink; rather, the module will ensure that the target zFS aggregate is resized to the specified size, and it will fail otherwise.

But these are not the only options available, so I will walk you through all of them; a short example using several of these options follows the list.

  • target
    • The fully qualified name of the zFS aggregate to be resized.
  • size
    • The desired size of the data set after resizing is performed. By default this is in KB.
  • space_type
    • The unit of measurement to use when defining the size; with this you can change the size definition to tracks, cylinders, MB or GB.
  • no_auto_increase
    • This one is interesting; it is based on the no_ai option of zfsadm shrink and zfsadm grow. By default, when a resize operation needs more space than the size originally specified, the aggregate size is automatically increased beyond that size. With no_auto_increase set to true, the total size will not be increased even if more space is needed.
  • verbose
    • Displays messages that might be useful for debugging, such as the output and trace from the zfsadm grow and zfsadm shrink commands, as well as other messages from different steps of the process.
  • trace_destination
    • Verbose output might be too large; for that reason, the module also provides the option to log the verbose output directly into a specified USS file, a sequential MVS data set, or a PDS/E member.
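
As a sketch of several of these options together, here is a shrink operation that logs its trace to a USS file; the aggregate name and trace path are hypothetical and the option names follow the list above:

    - name: Shrink a zFS aggregate to 250M and keep the trace in a USS file.
      ibm.ibm_zos_core.zos_zfs_resize:
        target: "IMSTEST.STORAGE.ZFS"
        size: 250
        space_type: "m"
        no_auto_increase: true        # do not automatically increase the size if more space is needed
        verbose: true
        trace_destination: "/tmp/zfs_resize_trace.txt"
      register: shrink_output
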
We are happy to continue providing enhancements, fixes and new capabilities to the collection in this new release, and we want to thank our users for continuing to choose the ibm_zos_core collection for their automation tasks!

About the Author

Oscar Fernando Flores Garcia is the IBM z/OS Ansible Core team lead, with over 8 years of experience. He now leads the design and development of the z/OS Ansible core product and has been responsible for many of the product releases.



The Development Team

Without the development team, this would not be possible. I would like to thank the amazing team that works with passion and perseverance on this project.

  • Rich Parker
  • Ketan Kelkar
  • Oscar Fernando Flores Garcia
  • Ivan Alejandro Moreno Soto
  • Andre Marcel Gutierrez Benitez
  • Demetrios Dimatos
  • Amit Ranjan
  • Rohitash Goyal
  • Surendra Ravella
  • Mayank Mani
  • Yogesh Rana

Resources

IBM Ansible z/OS core on Galaxy
IBM Ansible Core Collection Repository on GitHub
IBM Ansible Core Collection on Automation Hub
Red Hat® Ansible Certified Content for IBM Z documentation
