LPM Improvements in PowerVM 3.1.2

By Pete Heyrman posted Wed February 03, 2021 03:38 PM

  


The PowerVM team is on a continuous journey to provide improvements to Live Partition Mobility (LPM) as requested via user groups and Requests For Enhancements (RFEs).  The latest release provides both performance and functional improvements to the LPM offering.  This blog describes the performance enhancements in more detail, along with performance results.

Performance improvements
Over the years, the PowerVM team has delivered a series of improvements to the performance of LPM.  In 2016, IBM improved performance by using multiple Mover Server Partitions (MSPs) to migrate the data, achieving up to a 2x improvement.  In 2018, encryption/compression was introduced, which not only encrypted the data in transit but also provided up to a 15x improvement on slower networks by compressing the data.  Now, IBM is again improving LPM performance with changes in PowerVM 3.1.2 (VIOS 3.1.2, server firmware FW950 and HMC V9 R2 M950).  The following sections describe the changes in PowerVM that improve performance.

Automatic selection of MSP threads of execution
The LPM process can require a significant amount of processing to drive very high-speed network connections.  The basic LPM steps for data transfer are:
- PowerVM Hypervisor determines the memory to copy
- Hypervisor compresses and encrypts the data using hardware accelerators
- Hypervisor copies the data to the MSP's buffers
- MSP transmits the data over the network connections
- MSP on target system receives the buffers from the network and sends them to the Hypervisor
- Hypervisor decrypts and decompresses the buffers, again using hardware accelerators
- Hypervisor copies the buffers into the target VM's memory
The majority of this work is performed by threads (tasks) running in the MSPs.  To fully utilize the bandwidth of higher speed network connections, multiple threads are required.  For years, customers have been able to control the number of threads used for LPM operations by specifying the concurr_migration_perf_level option on the migrlpar command or by setting the concurrency_lvl attribute on the VIOS LPM pseudo device.  As described in How does PowerVM Development use your Call Home Data, IBM development has learned that very few customers use these customization attributes.  To improve performance, starting with PowerVM 3.1.2, if the number of threads is not explicitly specified on the migrlpar command or the MSP pseudo device, the HMC will determine a concurrency level.  The number of threads is based on factors such as the size of the VM being migrated and the number of threads available on the MSPs.
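For reference, the sketch below shows how the concurrency level can be specified explicitly, or omitted so that the HMC selects it automatically on PowerVM 3.1.2.  The managed system and partition names (src_sys, tgt_sys, my_lpar) are placeholders, and the LPM pseudo device is assumed to have its typical name of vioslpm0; exact syntax may vary by HMC and VIOS level, so consult the migrlpar and chdev documentation for your environment.

    # Explicitly request concurrency level 1 (4 threads per MSP) for a single migration.
    # Run on the HMC; src_sys, tgt_sys and my_lpar are placeholder names.
    migrlpar -o m -m src_sys -t tgt_sys -p my_lpar -i "concurr_migration_perf_level=1"

    # Or set a default on the VIOS LPM pseudo device (device name assumed to be vioslpm0):
    chdev -dev vioslpm0 -attr concurrency_lvl=1

    # With PowerVM 3.1.2, leave the option and attribute unset so the HMC can
    # choose the number of threads automatically:
    migrlpar -o m -m src_sys -t tgt_sys -p my_lpar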


Increase in the number of threads supported
Prior to PowerVM 3.1.2, the number of threads per migration operation per MSP was limited to 4.  In performance testing with 100Gb network connections, IBM determined that more threads were required to utilize more of the connection's bandwidth.  Starting in PowerVM 3.1.2, the HMC will automatically determine the number of threads, up to 8 per MSP, to use for the LPM operation.

Because adoption of the migrlpar concurr_migration_perf_level option and the MSP pseudo-device attribute is low, these interfaces are still limited to specifying at most 4 threads per MSP.
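If you want to confirm whether an explicit concurrency level is configured on a VIOS, the pseudo-device attributes can be listed as sketched below; again, the device name vioslpm0 is an assumption based on the typical default.

    # On the VIOS, list the LPM pseudo-device attributes (device name assumed to be vioslpm0):
    lsdev -dev vioslpm0 -attr

    # Or display only the concurrency level attribute:
    lsdev -dev vioslpm0 -attr concurrency_lvl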

Additional performance improvements
Multi-chip servers from all vendors can exhibit Non-Uniform Memory Access (NUMA) effects: accessing memory that is not attached to the chip doing the processing is generally slower than accessing local memory.  The PowerVM Hypervisor was changed in PowerVM 3.1.2 to reduce the NUMA effect by allocating the buffers used for LPM operations on the same chip as the MSPs whenever possible.

Another improvement was made to the start of an LPM operation.  Since the inception of LPM, each VM being migrated is put into a hardware mode available on POWER processors that allows the hypervisor to track which pages are changed by the applications.  This allows the hypervisor to maintain a coherent memory image while copying VM data from the source to the target server.  Improvements were made in the hypervisor to reduce the time required to initialize this special hardware mode.  The improvement is especially noticeable for larger VMs.

Performance Results
The following LPM tests were performed on a pair of 64TB Power E980 servers.  Each server has two VIOS VMs (MSPs), and each MSP has a dedicated 100Gb Ethernet connection for LPM.  The VM being migrated was idle (no active workload running) to ensure consistency between measurements.  Because the VMs were idle, the compression ratios are optimal, so the measured migration times may not be typical of actual customer environments.  The data does, however, show the overall performance improvement from the automatic selection of the number of threads and the additional changes.

VM Size | FW940, no concurrency level specified | FW940, concurrency level 1 (4 threads per MSP) | FW950, automatic selection of number of threads
------- | ------------------------------------- | ---------------------------------------------- | ------------------------------------------------
49 GB   | 1:29                                  | 1:12                                           | 1:12 (1.2x)
199 GB  | 2:04                                  | 1:24                                           | 1:25 (1.4x)
399 GB  | 2:56                                  | 1:39                                           | 1:38 (1.8x)
599 GB  | 4:18                                  | 2:31                                           | 2:06 (2.0x)
799 GB  | 5:19                                  | 2:43                                           | 2:12 (2.4x)
999 GB  | 5:55                                  | 3:04                                           | 2:16 (2.6x)
1400 GB | 7:53                                  | 3:27                                           | 2:34 (3.1x)
1600 GB | 8:53                                  | 3:49                                           | 2:38 (3.4x)
32 TB   | 2:25:43                               | 50:28                                          | 24:39 (5.9x)
54 TB   | 4:15:19                               | 1:23:48                                        | 41:50 (6.2x)
64 TB   | 4:39:44                               | 1:35:33                                        | 48:41 (5.6x)

Times are shown as minutes:seconds (hours:minutes:seconds for the larger VMs).  The factor in parentheses is the speedup of FW950 relative to FW940 without a concurrency level specified.

As shown by the performance results, there can be a significant reduction in the time to perform LPM, and the benefit increases with the size of the VM.

Looking forward

The PowerVM team has already started working on additional enhancements that have been requested through user groups and Requests for Enhancements (RFEs).  These again focus on additional functionality and performance improvements.

Contacting the PowerVM Team

Have questions for the PowerVM team or want to learn more?  Follow our discussion group on LinkedIn (IBM PowerVM) or the IBM Community discussions.



