PowerVM Version 2.2.4 is a major upgrade that includes many new enhancements. One key area of focus has been improvements to Live Partition Mobility (LPM). LPM is at the heart of any cloud solution and provides higher availability during planned outages.
Better NPIV Storage Validation
PowerVM Version 2.2.4 allows you to select the level of NPIV storage validation that best fits your environment. By default, a VIOS at PowerVM Version 2.2.4 continues to perform LPM validation at the NPIV port level. This is appropriate if you are confident that the SAN storage is correctly zoned. If you are setting up a new LPM environment, or want to assure yourself that SAN zoning errors are caught before an LPM operation starts, you will want to enable the new disk plus port level validation. With disk-level validation enabled, the VIOS validates that the individual disk LUNs assigned to the partition are usable on the target system. This additional checking increases the time required for LPM validation, but has the advantage of surfacing zoning issues that could impact VM migration.
To take advantage of NPIV validation at the disk level, both the source and target VIOS partitions must be at Version 2.2.4. To enable disk-level validation, set the src_lun_val attribute in the LPM pseudo-device (vioslpm0) of each VIOS that is hosting NPIV storage on the source system to on. In addition, the dest_lun_val attribute on the VIOS partitions that are hosting NPIV storage on the destination system must not be set to lpm_off or off.
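For example, assuming the default vioslpm0 device name and the VIOS padmin command line, the attributes can be inspected and changed with lsdev and chdev (a sketch; confirm the attribute values supported at your VIOS level):

$ lsdev -dev vioslpm0 -attr src_lun_val        # on each source VIOS
$ chdev -dev vioslpm0 -attr src_lun_val=on
$ lsdev -dev vioslpm0 -attr dest_lun_val       # on each destination VIOS
$ chdev -dev vioslpm0 -attr dest_lun_val=on    # any value other than off or lpm_off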
Improved Resiliency
Most customers utilize a dual VIOS configuration to ensure the highest level of availability for client partitions. In past versions of PowerVM, LPM operations would only succeed if both VIOS partitions in a dual VIOS configuration were functional. Starting in PowerVM Version 2.2.4, LPM operations can succeed even if one of the two VIOS partitions in a dual VIOS configuration has failed. This provides better resiliency against both hardware and software failures.
To a large degree, the responsiveness of applications during LPM operations depends on the network being used to transfer the LPM data. Some applications have built-in heartbeat mechanisms over SAN and/or network connections, and these applications are sensitive to small changes in response times. IBM recommends that customers use 10Gb or better connections for applications that rely on heartbeat mechanisms. IBM recognizes that a 10Gb network is not an option for all customers, so we've added a concurrency level setting to help with responsiveness on slower networks. If you have applications that are sensitive to timeouts and are running on a slow network, you should specify concurrency level 5 for the migration. Concurrency level 5 reduces the latency of network requests by reducing the peak bandwidth, which can alleviate application heartbeat issues. The concurrency level attribute, concurrency_lvl, can be found in the LPM pseudo-device (vioslpm0) of the VIOS partitions designated as the mover service partition (MSP).
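For instance, on each VIOS acting as an MSP, the concurrency level could be lowered with chdev (a sketch, assuming the default vioslpm0 device name):

$ chdev -dev vioslpm0 -attr concurrency_lvl=5
$ lsdev -dev vioslpm0 -attr concurrency_lvl    # confirm the new setting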
Improved Performance
Performance of Live Partition Mobility has been a focus area for IBM over the past few releases of PowerVM. This trend continues in Version 2.2.4 with scalability improvements to support higher speed connections. Prior to Version 2.2.4, a single LPM operation could saturate at most a 10Gb network connection. Improvements have been made in the VIOS and the PowerVM hypervisor in Version 2.2.4 to support network bandwidth of up to 35Gb. The connection can be a single link or a redundant connection built using link aggregation. These higher speed connections for a single LPM operation not only reduce the time to migrate a partition but can also address application issues that are triggered by slower lines.
To drive a high speed line at its rated speed, you must allocate additional VIOS resources to the LPM operation. You can control the amount of resources via the concurrency level setting on the VIOS and/or the migrlpar HMC command. The concurrency level specified on the migrlpar HMC command overrides, on an individual request basis, the default concurrency level specified in the VIOS. The highest level of concurrency (best performance and highest resource consumption) is concurrency level 1; this is recommended if you want to drive an LPM operation at network speeds greater than 30Gb. Concurrency level 4 is the default value and is recommended for line speeds up to 10Gb.
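As a sketch, the VIOS default could be raised to the highest concurrency level on each MSP, with the HMC migrlpar command used to validate or start the operation (the per-request concurrency override is passed as a migrlpar option on supporting HMC levels; the exact option name is not shown here, so check the HMC 8.4.0 documentation):

$ chdev -dev vioslpm0 -attr concurrency_lvl=1      # on each MSP VIOS
$ migrlpar -o v -m <source_system> -t <target_system> -p <partition_name>   # validate
$ migrlpar -o m -m <source_system> -t <target_system> -p <partition_name>   # migrate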
To take advantage of these performance improvements, both the source and target systems must be at PowerVM version 2.2.4.
Virtual Switch
Starting in PowerVM 2.2.4, the HMC allows you to select the virtual switch name on the target system. Prior to PowerVM 2.2.4, there was no option to override the virtual switch name, so a virtual switch with the same name was required on both the source and target systems.
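When initiating the migration from the HMC command line, the target virtual switch can also be supplied on the migrlpar command. The sketch below assumes an -i attribute named vswitch_mappings with a vlan/source-switch/target-switch format; both the attribute name and format are assumptions here, so consult the HMC Release 8 Version 8.4.0 migrlpar documentation for the exact syntax:

$ migrlpar -o m -m <source_system> -t <target_system> -p <partition_name> \
    -i "vswitch_mappings=2/ETHERNET0/ETHERNET1"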
PowerVM 2.2.4 Components
PowerVM 2.2.4 is comprised of:
- VIOS Version 2.2.4
- System Firmware Release 840
- HMC Release 8 Version 8.4.0
Contacting the PowerVM Team
Have questions for the PowerVM team or want to learn more? Follow our discussion group on LinkedIn (IBM PowerVM) or the IBM Community Discussions.
#PowerVM #powervmblog #powervmlpm