Understanding PowerVM Hypervisor Memory Usage

By Pete Heyrman posted Mon June 08, 2020 10:20 AM

PowerVM Performance

The PowerVM hypervisor uses some of the memory activated in a Power server to manage the memory assigned to individual partitions, to manage I/O requests, and to support virtualization features.  The amount of memory the hypervisor requires for these functions varies with the configuration options that have been chosen.  Assigning this memory to the hypervisor ensures secure isolation between logical partitions (LPARs), since the only allowed access to its contents is through security-validated hypervisor interfaces.  The following information explains how the hypervisor uses memory and how to verify that the configuration options are appropriate for each situation.


Displaying the Amount of Memory Assigned to the Hypervisor

The amount of memory that is currently assigned to the hypervisor can be displayed from the Hardware Management Console (HMC) server properties tab.  The following is an example of the server memory properties information:

PowerVM Server Memory Example

In this example, the system has 32 GB of installed memory, 32 GB of licensed (Configurable) memory, and 1.25 GB of (Reserved) memory assigned to the hypervisor.


Components that Contribute to Hypervisor Memory Usage

The three main components that contribute to the overall usage of memory by the hypervisor are:

  1. Memory required for hardware page tables (HPT)
  2. Memory required to support I/O devices
  3. Memory required for virtualization


Memory Usage for Hardware Page Table (HPT)

Each partition on the system has its own hardware page table that contributes to hypervisor memory usage.  The HPT is used to translate the effective addresses used by the operating system into physical real addresses in the hardware.  This translation allows multiple operating systems to run simultaneously, each in its own logical address space.  Whenever a virtual processor for a partition is dispatched on a physical processor, the hypervisor indicates to the hardware the location of the partition's HPT to be used when translating addresses.  The amount of memory for the HPT is based on the maximum memory size of the partition and the HPT ratio.  The default HPT ratio is 1/64th of the maximum memory size of the partition for IBM i partitions, or 1/128th for AIX, VIOS and Linux partitions.  AIX, VIOS and Linux use larger page sizes (16K, 64K and such) instead of 4K pages.  Larger page sizes reduce the overall number of pages that need to be tracked, so the overall size of the HPT can be reduced.  As an example, for an AIX partition with a maximum memory size of 256GB, the HPT would be 2GB.
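As a rough illustration of the default sizing described above, the HPT estimate is simply the maximum partition memory divided by the default ratio for the operating system type (this sketch ignores any rounding the hypervisor may apply to the allocation):

```python
# Default HPT ratios from the text: 1/64 for IBM i, 1/128 for AIX, VIOS
# and Linux.  This is a simplified illustration, not the hypervisor's
# exact allocation algorithm.
DEFAULT_HPT_RATIO = {"IBM i": 64, "AIX": 128, "VIOS": 128, "Linux": 128}

def hpt_size_gb(max_mem_gb: float, os_type: str) -> float:
    """Estimate HPT size: maximum partition memory / default HPT ratio."""
    return max_mem_gb / DEFAULT_HPT_RATIO[os_type]

# The article's example: an AIX partition with a 256 GB maximum
# memory size gets a 2 GB hardware page table.
print(hpt_size_gb(256, "AIX"))    # 2.0
print(hpt_size_gb(256, "IBM i"))  # 4.0
```

Note how the same 256 GB maximum costs twice as much HPT space for an IBM i partition because of its smaller default ratio.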

When defining a partition, the maximum memory size specified should be based on the amount of memory that can be dynamically added to the partition (DLPAR) without having to change the configuration and reboot the partition.  In addition to setting the maximum memory size, the HPT ratio can also be configured.  The hpt_ratio parameter on the chsyscfg HMC command can be used to define the HPT ratio for a partition profile.  Valid values are 1:32, 1:64, 1:128, 1:256 or 1:512.  Specifying a smaller absolute ratio (1/512 is the smallest value) will decrease the overall memory assigned to the HPT.  Testing is required when changing the HPT ratio, because a smaller HPT may incur additional CPU consumption as the operating system may need to reload the entries in the HPT more frequently.  Most customers have chosen to use the IBM-provided default values for the HPT ratios. 
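A chsyscfg invocation for the hpt_ratio attribute might look like the following sketch, built here as a string; the managed-system, profile, and partition names are hypothetical placeholders, and the exact attribute syntax should be confirmed against the HMC command reference:

```python
# Valid hpt_ratio values listed in the text.
VALID_HPT_RATIOS = {"1:32", "1:64", "1:128", "1:256", "1:512"}

def build_chsyscfg_cmd(managed_system: str, profile: str,
                       lpar: str, ratio: str) -> str:
    """Build an HMC chsyscfg command that sets hpt_ratio on a
    partition profile.  Names here are illustrative placeholders."""
    if ratio not in VALID_HPT_RATIOS:
        raise ValueError(f"hpt_ratio must be one of {sorted(VALID_HPT_RATIOS)}")
    return (f'chsyscfg -r prof -m {managed_system} '
            f'-i "name={profile},lpar_name={lpar},hpt_ratio={ratio}"')

print(build_chsyscfg_cmd("Server-9080", "default_profile", "aixlpar1", "1:256"))
```

Validating the ratio before issuing the command avoids a round trip to the HMC for a value it would reject anyway.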

Memory Usage for I/O Devices

In support of I/O operations, the hypervisor maintains structures called Translation Control Entries (TCEs), which provide an information path between I/O devices and partitions.  The TCEs provide the address of the I/O buffer, an indication of read versus write request, and other I/O-related attributes.  There are many TCEs in use per I/O device, so multiple requests can be active simultaneously to the same physical device.  To provide better affinity, the TCE entries are spread across multiple processor chips/drawers to improve performance when accessing the TCEs.  For physical I/O devices, the base amount of space for the TCEs is defined by the hypervisor based on the number of I/O devices supported.  POWER8 and follow-on servers, in support of high-speed adapters, can also be configured to allocate additional memory to improve I/O performance.  Currently Linux is the only operating system that needs these additional TCEs, so this memory can be freed up for use by partitions if the server is only running AIX and/or IBM i.  Follow these steps to check whether a Power server has allocated additional memory for I/O devices:

  1. On the HMC, under the Operations section, select Launch Advanced System Management (ASM)
  2. Sign on to the Advanced System Management screen
  3. Under the System Configuration section, select I/O Adapter Enlarged Capacity
  4. A panel will be displayed that shows the number of slots to which additional I/O TCE space has been allocated.

The following is an example of the I/O Adapter Enlarged Capacity display where Enlarged capacity is enabled for 4 slots:

PowerVM ASMI I/O Adapter Enlarged Capacity


If additional I/O space for Linux high-speed adapters is not required, then the next time the server is powered off, follow the previous steps to display the I/O Adapter Enlarged Capacity screen and uncheck the Enable I/O Adapter Enlarged Capacity checkbox.  The reduction in memory allocated by the hypervisor depends on the amount of installed memory and the number of slots currently configured.  For large-memory systems, the savings can be significant.

In addition to the TCE space that is set aside for physical I/O devices, the hypervisor also sets aside memory for virtual I/O devices.  The amount of memory reserved for these TCEs is based on the number and types of virtual devices that are configured. 

Memory Usage for Virtualization Features

Virtualization requires additional memory to be allocated by the hypervisor for hardware state save areas and for the various virtualization technologies.  For example, on POWER8 and later servers, each processor core supports up to 8 SMT threads of execution, and each thread contains over eighty different registers.  The hypervisor must set aside save areas for the register contents for the maximum number of virtual processors that are configured.  The greater the number of physical devices, virtual devices and virtualization features in use, the more hypervisor memory will be required.  For efficient memory consumption, desired and maximum values for the various attributes (processors, memory, virtual adapters) should be based on business needs and not set to values that are significantly higher than actual requirements. 

Active Memory Mirroring (AMM)

Active Memory Mirroring is available on some enterprise class servers and allows the server to optionally mirror the memory that is assigned to the hypervisor.  With AMM, the hardware, in conjunction with the hypervisor, maintains two identical copies of any hypervisor data.  This allows the hypervisor to continue to run even if a DIMM failure occurs.  Of course, enabling AMM will double the amount of memory assigned to the hypervisor. 
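A minimal sketch of the AMM cost, using the 1.25 GB reserved-memory figure from the example earlier in this post:

```python
def reserved_with_amm(hypervisor_gb: float, amm_enabled: bool) -> float:
    """AMM keeps two identical copies of all hypervisor data, so the
    reserved memory doubles when it is enabled."""
    return hypervisor_gb * 2 if amm_enabled else hypervisor_gb

# Using the 1.25 GB hypervisor reservation from the HMC example above:
print(reserved_with_amm(1.25, amm_enabled=True))   # 2.5
print(reserved_with_amm(1.25, amm_enabled=False))  # 1.25
```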

Predicting Memory Usage

The IBM System Planning Tool (SPT) is a resource that can be used to estimate the amount of hypervisor memory required for a specific server configuration.  Once the SPT executable file is downloaded and installed, a configuration can be defined by selecting the appropriate hardware platform, selecting the installed processors and memory, and defining partitions and partition attributes.  Given a configuration, the SPT can estimate the amount of memory that will be assigned to the hypervisor, which helps when making changes to an existing configuration or when deploying new servers. 


Understanding and utilizing the concepts in this blog will help ensure efficient utilization of the memory resources available on Power servers. 

Contacting the PowerVM Team

Have questions for the PowerVM team or want to learn more?  Follow our discussion group on LinkedIn (IBM PowerVM) or IBM Community Discussions.