
Getting started with zCX maintenance version 1.26.0

  

Introduction

zCX appliance maintenance version 1.26.0 introduces new STOR-type VSAM linear data sets (LDSes) to store zCX instance user data. New zCX instances provisioned at appliance version 1.26.0 with zCX z/OSMF workflow 1.2.0 use STOR-type VSAM LDSes as the zCX instance user data disks and no longer allocate or use DATA-type VSAM LDSes.

  • The DATA-type disks had the following VSAM LDS naming convention: <ZCX_HLQ>.<ZCX_INSTNAME>.DATA*

  • The STOR-type disks have the following VSAM LDS naming convention: <ZCX_HLQ>.<ZCX_INSTNAME>.STOR*
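
For example, a quick way to see which user data disks exist for an instance is to list the catalog entries under the instance qualifiers with IDCAMS. The job step below is only a sketch; the high-level qualifier ZCX and the instance name MYZCX are placeholders, so substitute your own values and add your site's JOB statement:

//*  List the catalog entries for one zCX instance. The DATA* and/or
//*  STOR* VSAM linear data sets appear among the results.
//LISTDISK EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT LEVEL(ZCX.MYZCX) VOLUME
/*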

Important Notes:

    • It is recommended that you check your site policies to see whether additional action is required to handle the new STOR-type VSAM LDS naming convention.

    • It is recommended that you use the updated sample properties file provided with the maintenance to pick up the new zCX z/OSMF variables.

Upgrading existing zCX instances to maintenance version 1.26.0 using zCX z/OSMF workflow version 1.2.0 allocates additional STOR-type VSAM LDSes to contain the converted zCX instance user data disks. The user data conversion is required and must be performed for future zCX maintenance compatibility. For each existing DATA-type VSAM LDS, the zCX z/OSMF upgrade workflow version 1.2.0 allocates a STOR-type VSAM LDS of the same size. Work with your storage administrator to ensure that enough storage space is available for the additional STOR-type VSAM LDS allocations during z/OSMF upgrade workflow execution.

Note: As general guidance, each zCX appliance instance requires additional storage space equivalent to its current user data file system allocation size to hold the converted user data file system. For example, if a zCX instance’s DATA-type VSAM LDS allocation size is 20 GB, then you need an additional 20 GB of storage volume space for the STOR-type VSAM LDSes to upgrade that instance.
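
For illustration only, a STOR-type user data disk is a VSAM linear data set, so a 20 GB STOR disk corresponds to an IDCAMS define roughly like the sketch below. You do not allocate STOR disks yourself during the upgrade (the upgrade workflow performs the allocations); the data set name, size, and storage class shown here are hypothetical:

//DEFSTOR  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
   /* Hypothetical 20 GB (20480 MB) STOR-type VSAM linear data set. */
   /* The zCX z/OSMF upgrade workflow issues the real defines.      */
   DEFINE CLUSTER (NAME(ZCX.MYZCX.STOR001) -
          LINEAR                           -
          MEGABYTES(20480)                 -
          STORAGECLASS(ZCXSTG))
/*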

The zCX z/OSMF upgrade workflow can be performed while the zCX instance is up, without impacting the running appliance. However, a restart of the zCX appliance is required: the automatic user data conversion happens when you restart the zCX appliance, and the restart boot process converts the zCX user data from the DATA-type VSAM LDSes to the STOR-type VSAM LDSes. Schedule the window to recycle the zCX address space according to your operational requirements.

At a high level, the following describes the process and the appliance and workflow versions required:

Starting from zCX appliance version 1.26.0, STOR-type VSAM LDSes are used as the user data disks, replacing the existing DATA-type VSAM LDS user data disks.

Consider backing up all VSAM data sets associated with the zCX instance before performing the upgrade. The zCX instance must be stopped before performing the backup.
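
As one possible approach, a DFSMSdss logical dump of the instance's data sets could look like the sketch below. The data set qualifiers, the output backup data set, and the tape unit are placeholders, and your site may prefer its own backup product; in all cases, stop the zCX instance before running the backup:

//* Logical dump of all data sets for zCX instance MYZCX.
//* Run only while the zCX instance is stopped; route the output
//* (tape here) and name the data sets according to your site standards.
//DSSBKUP  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//OUTDD    DD DSN=ZCX.BACKUP.MYZCX.DUMP,DISP=(NEW,CATLG),UNIT=TAPE
//SYSIN    DD *
  DUMP DATASET(INCLUDE(ZCX.MYZCX.**)) -
       OUTDDNAME(OUTDD)
/*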

Software Requirements

Apply the following maintenance: the PTFs for APARs OA65991, OA65992, OA65993, OA65994, OA65756, and OA65770.

All existing zCX instances must be upgraded using zCX z/OSMF workflow version 1.2.0 and restarted after installing these APARs to perform the automatic user data conversion.

Resource Requirements

Additional storage resources for the allocation of STOR-type VSAM LDSes: for each existing DATA-type VSAM LDS, the zCX z/OSMF upgrade workflow version 1.2.0 allocates a STOR-type VSAM LDS of the same size. (Required)

Additional resources for performing a backup of the VSAM data sets for the zCX instance. (Optional)

Upgrading an existing zCX instance

The zCX z/OSMF upgrade workflow version 1.2.0 upgrades your existing zCX appliance to version 1.26.0. It allocates new STOR-type VSAM LDSes (STOR disks) based on the existing DATA-type VSAM LDSes (DATA disks). For each DATA disk, a STOR disk of the same size is allocated.

After completing step 3.2, a summary report of the DATA disk allocations is generated and can be viewed on the STDOUT tab (under the Status tab) for this step. Use it as a reference to determine whether the user data storage configuration values you provided can accommodate the new STOR disk allocations.

The upgrade workflow supports up to 120 DATA disks. If your zCX instance is running with more than 120 DATA disks, contact IBM for assistance with the upgrade.

Create the workflow instance

From the Workflows plugin in z/OSMF, create the zCX z/OSMF upgrade workflow. By default, the workflows are installed at /usr/lpp/zcx_zos.

The workflow version should be 1.2.0. For simplicity, make sure the “Assign all steps to owner user ID” option is checked. This allows you to perform every step in the workflow. Then click Finish.

Performing the upgrade

For steps 1, 2, 3, 3.1, and 3.2, you must click into each step and perform it manually. The remaining steps, 3.3 through 17, are automated; the final steps, 18 and 19, display the stop and start commands and are performed manually as described below.

Begin by performing step 1. Provide the zCX instance name, the zCX instance registry directory, and the install directory for your zCX instance. The install directory should point to where the workflow maintenance (APAR OA65991) was applied; by default, this is /usr/lpp/zcx_zos.

Next perform step 2 and confirm the current and target ROOT binary for the upgrade. The target ROOT binary should be version 1.26.0. In the example below, the current zCX appliance version 1.25.1 is being upgraded to version 1.26.0. Click Finish to complete the step.

STOR disk allocations

The upgrade workflow allocates the same number and size of STOR disks for each DATA disk. Additional STOR disk allocations cannot be requested during an upgrade.

Note: If additional STOR disks are needed, use the add data disks workflow to add the additional STOR disks after the upgrade workflow is performed.

Step 3 performs the STOR disk allocations. It is made up of 7 substeps that calculate, allocate, and report the total allocations.

Steps 3.1 and 3.2 must be performed manually.

Step 3.1 calculates the required STOR disks. The calculation is based on the total number of DATA disks: for each DATA disk, a STOR disk of the same size is allocated.

Step 3.2 prompts for the user data storage configuration values that will be used to allocate the STOR disks. Work with your storage administrator to ensure that the values provided can accommodate the additional STOR disk allocations. You must confirm the values by checking the “User data value validation completed” checkbox before completing the step.

After completing step 3.2, the STDOUT tab (under the Status tab) for step 3.2 shows a summary of the current DATA disks. Use it as a reference to determine the additional STOR disk allocations and the total size needed.

Perform the remainder of the workflow automatically

Step 3.3 and the remaining automated steps (through step 17) perform the allocation of the STOR disks and the rest of the upgrade.

To perform the steps automatically, select step 3.3 by checking its check box, and then, from Actions, select Perform the step.

Performing step 3.3 should show the dialog below. Choose OK to begin the automation.

From the Workflow Details, the Status should show Automation in Progress. Refresh the page until the automation completes.

If the automation completed successfully, it should show step 17 as Complete and step 18 as Ready.

Perform step 18 to show the command to manually stop the zCX instance. Use the command to stop the zCX instance if it is currently started.

Perform step 19 to show the command to start the zCX instance. Use the command to start your zCX instance.
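
Steps 18 and 19 display the exact commands for your instance, so use the commands shown there. As a general sketch only, for an instance named MYZCX whose start.json is in a hypothetical instance registry directory, the stop and start console commands typically look like the following.

To stop the instance:

P MYZCX

To start the instance:

S GLZ,JOBNAME=MYZCX,CONF='/global/zcx/instances/MYZCX/start.json'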

After starting the zCX instance, the SDSF output for the zCX instance should show two messages: one indicating that the conversion of the zCX user data file system has started, followed by one indicating that it has completed.

After the zCX instance has started, SSH into the zCX CLI and validate your existing deployed zCX Docker containers, Docker images, Docker volumes, and home directory contents.
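
For example, a few basic checks after logging in to the zCX Docker CLI over SSH; the host name, SSH port, and user ID below are placeholders, so use the values configured for your instance:

ssh -p 8022 admin@zcxmyzcx.example.com   # log in to the zCX Docker CLI
docker ps -a                             # deployed containers
docker images                            # Docker images
docker volume ls                         # Docker volumes
ls -la ~                                 # home directory contents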

STOR disk allocation errors

If a STOR disk allocation error occurs, perform step 3.6 and then perform the allocation steps again.

Steps 3.1 to 3.5 are discussed above; they perform the calculation and allocation of the STOR disks.

Step 3.6 is marked “Skipped” and is intended to be performed manually only when the allocation steps (steps 3.3 to 3.5) fail. This step deallocates ALL existing STOR disks to allow steps 3.2 to 3.5 to be performed again manually.

Re-performing step 3.2 allows you to update your user data storage configuration values. The new values are then used when the allocation steps, steps 3.3 to 3.5, are re-performed.

For steps 3.3 to 3.5, you only need to re-perform the steps that are not marked Skipped. The allocations are split across these three steps, and each step performs a set of allocations. An allocation step is marked Skipped if it does not need to run, for example, when the previous step was able to perform all the needed allocations.

In the example below, step 3.3 was able to perform all the allocations, so steps 3.4 and 3.5 are marked Skipped.

After re-performing the steps up to the failing step, perform the next step automatically. This drives the automation up to step 18.

For reference, after completing step 3.7, the STDOUT output (under the Status tab) for that step shows a summary of the DATA and STOR disks.

Acknowledgements

Bill Keller, IBM, Content Designer-Systems. bkeller@us.ibm.com

#IBMz/OS #IBMZ #zCX #Containers