Dynamic Partition Manager (DPM) has simplified partition lifecycle management and IO configuration since the IBM z13 and LinuxONE systems. Most of the IBM Z specific skills that were once required to configure IO and create partitions are no longer mandatory for managing an IBM Z or LinuxONE system. A system administrator with little or no IBM Z specific skills can follow the intuitive wizards and screens in the Hardware Management Console (HMC) and create partitions, with their IO requirements, in minutes. This experience is supported in the HMC UI as well as through the HMC Web Services API, so it can be integrated into whatever automation workflows clients may have set up.
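For example, the partition-creation flow that the HMC wizards provide can also be scripted against the Web Services API. Below is a minimal sketch using the open-source zhmcclient Python library; the host name, credentials, CPC name and property values are placeholders for illustration, not part of any example in this article.

```python
import zhmcclient

# Connect to the HMC Web Services API (placeholder host and credentials).
session = zhmcclient.Session('hmc.example.com', 'hmcuser', 'password')
client = zhmcclient.Client(session)

# Find a DPM-enabled CPC and create a partition in one call, much as
# the HMC partition wizard would.
cpc = client.cpcs.find(name='CPC1')
partition = cpc.partitions.create({
    'name': 'LNXPART1',
    'ifl-processors': 2,      # shared IFL processors
    'initial-memory': 4096,   # MiB
    'maximum-memory': 8192,   # MiB
})
```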
Since z14 and LinuxONE II, DPM has also simplified storage management by automating decisions and tasks for the system administrator. The storage resources required by partitions are grouped into a storage group, to which DPM assigns a set of worldwide port names (WWPNs). These WWPNs are included in the zoning configuration and LUN masking configuration in the Storage Area Network (SAN), enabling them to access the storage disks configured in the storage system. DPM provides guaranteed storage availability for partitions even before the operating system is started: using the LUN discovery commands supported in the FCP protocol, DPM queries the SAN configuration to validate that the WWPNs are uniformly configured to see the same set of LUNs. Once validated, the storage groups are ready to be attached to one or more partitions. On attaching a storage group to a partition, the WWPNs of the storage group are used to create virtual Host Bus Adapters (vHBAs) on the FCP adapters whose storage configuration DPM had earlier validated.
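Continuing the zhmcclient sketch above, the same flow can be driven via the API: create an FCP storage group, read back the WWPNs that DPM assigned so they can be zoned and LUN-masked, and attach the group. Property names are meant to follow the HMC Web Services API "Create Storage Group" operation, but the group name and volume sizes here are illustrative assumptions.

```python
# Continuing the zhmcclient session from the earlier sketch.
console = client.consoles.console
storage_group = console.storage_groups.create({
    'name': 'SG1',
    'cpc-uri': cpc.uri,
    'type': 'fcp',
    'connectivity': 4,       # vHBAs created per attached partition
    'max-partitions': 2,
    'storage-volumes': [
        {'operation': 'create', 'size': 100, 'usage': 'data'},  # GiB
    ],
})

# DPM assigns the WWPNs at creation time; read them from the storage
# group so they can be included in the SAN zoning and LUN masking
# (property name as documented for the Storage Group object).
for wwpn_info in storage_group.get_property('unassigned-world-wide-port-names'):
    print(wwpn_info['world-wide-port-name'])

# Once DPM has validated the SAN configuration, attach the group.
partition.attach_storage_group(storage_group)
```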
Automatic adapter selection
The selection of the FCP adapters is fully automatic, and DPM considers the redundancy and performance characteristics of the adapters. DPM makes a best-effort attempt to select adapters across domains and IO cages, preferring those with the least utilization at the time, to create the vHBAs for the WWPNs of the storage group. To do this, DPM must be able to assign any WWPN of the storage group to any adapter that has been previously validated. If an adapter is connected to a storage switch that does not have the WWPNs in its zoning configuration, DPM cannot choose that adapter for the vHBAs. It is therefore always required to symmetrically zone all the WWPNs of the storage group in all the switches that are to be used for storage connectivity.
Also, when the FCP adapters in a system are connected to more than one switch, DPM requires that the WWPNs of the storage group are included in the zoning configuration of at least two switches. This allows DPM to choose adapters that connect to different switches, providing redundancy at the switch level as well. The sketch below illustrates this selection heuristic.
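The following plain-Python sketch models the heuristic just described: round-robin across switches, least-utilized adapter first on each switch. It is a simplified illustration (adapter names, switch mapping and utilization figures are invented), not DPM's actual implementation.

```python
from collections import defaultdict

def select_adapters(eligible_adapters, count):
    """Pick `count` adapters, alternating across switches and taking the
    least-utilized adapter on each switch first. `eligible_adapters`
    holds only adapters whose switch has all the storage group's WWPNs
    zoned, mirroring the constraint described above.
    """
    by_switch = defaultdict(list)
    for adapter in eligible_adapters:
        by_switch[adapter['switch']].append(adapter)
    for members in by_switch.values():
        members.sort(key=lambda a: a['utilization'])  # least-utilized first

    selected = []
    while len(selected) < count and any(by_switch.values()):
        for switch in sorted(by_switch):   # alternate between switches
            if by_switch[switch] and len(selected) < count:
                selected.append(by_switch[switch].pop(0))
    return selected

adapters = [
    {'name': 'A1', 'switch': 'SW1', 'utilization': 0.30},
    {'name': 'A2', 'switch': 'SW1', 'utilization': 0.10},
    {'name': 'A5', 'switch': 'SW2', 'utilization': 0.20},
    {'name': 'A6', 'switch': 'SW2', 'utilization': 0.40},
]
print([a['name'] for a in select_adapters(adapters, 4)])
# ['A2', 'A5', 'A1', 'A6'] -- two per switch, least-utilized first
```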
A sample configuration
Let us take an example configuration where a storage group is created with max-partitions as 2 and connectivity as 4. DPM allocates connectivity × max-partitions WWPNs for a storage group, so W1, W2, W3, W4, W5, W6, W7 and W8 (4 × 2 = 8) are the WWPNs that DPM allocated for this storage group. Let us also consider that the system has 8 FCP adapters (A1, A2, A3, A4, A5, A6, A7 and A8), 4 of them (A1…A4) connecting to SAN switch 'SW1' and the remaining 4 (A5…A8) connecting to a different SAN switch 'SW2'. Also consider that both switches are connected to a storage system, which will provide the storage disks for the storage group in this example. DPM expects all the WWPNs W1…W8 to be included in the zoning configurations of both switches SW1 and SW2. The storage system should also include LUN masking configurations for all the WWPNs W1…W8, mapping them to storage disks as per the storage group request.
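A quick way to picture the symmetric-zoning expectation is a validation check like the following sketch (plain Python, with invented zone data): it confirms that every WWPN of the storage group appears in every switch's zoning configuration.

```python
wwpns = {'W1', 'W2', 'W3', 'W4', 'W5', 'W6', 'W7', 'W8'}

# WWPNs found in each switch's zoning configuration (illustrative data;
# W8 has accidentally been left out of SW2's zones).
switch_zones = {
    'SW1': set(wwpns),
    'SW2': set(wwpns) - {'W8'},
}

for switch, zoned in switch_zones.items():
    missing = wwpns - zoned
    if missing:
        print(f'{switch}: missing {sorted(missing)} -- adapters on this '
              f'switch cannot be chosen until zoning is fixed')
    else:
        print(f'{switch}: all storage group WWPNs are zoned')
```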
DPM will validate that all 8 WWPNs (W1…W8) are configured in both switches and see the same set of LUNs. When the storage group is attached to a partition, DPM can pick any 4 WWPNs to create the vHBAs for the partition and will try to distribute them equally across the switches. The best fit is to select 2 adapters connecting to switch SW1 and 2 adapters connecting to the other switch, SW2. The partition thus gets 4 vHBAs that are redundantly distributed across the two switches to access the storage disks. The number of paths the operating system sees depends on the number of target ports configured for the WWPNs in each switch. For redundancy, it is ideal to have two target ports configured for the WWPNs in each switch; the operating system would then see 8 paths to the storage disks.
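The path arithmetic for this scenario works out as in the short sketch below (all figures taken from the example above):

```python
switches = 2                  # SW1 and SW2
vhbas_per_switch = 2          # DPM's balanced pick: 2 of the 4 vHBAs per switch
target_ports_per_switch = 2   # the recommended redundant configuration

# Each vHBA reaches every target port zoned on its own switch.
paths = switches * vhbas_per_switch * target_ports_per_switch
print(f'{paths} paths to the storage disks')   # 8 paths
```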
Conclusion
While DPM offers the benefits of redundancy and performance out of the box, it can also accommodate any custom planning on the client side. If clients have planned their partitions to use certain adapters other than the ones DPM chose for them, they can use the "Change Adapters" option to replace adapters as required. The option can be accessed from the Partition details panel or from the Storage Group details panel. The storage management offered by DPM on IBM Z and LinuxONE systems is a simple, intuitive and modern user experience that helps system administrators by automating the platform-specific complexities of IO configuration.