AIX & PowerVM Server-side caching enhancements

By Rob Gjertsen posted Thu June 25, 2020 06:59 PM



Server-side caching is a powerful feature of the AIX® operating system developed by IBM. It can significantly improve the performance of your I/O (storage) intensive workloads, and with PowerVM support, system administrators and storage administrators can take advantage of it across virtualized environments. This blog focuses on PowerVM support for server-side caching on the AIX operating system.


What is Server-side Caching

Server-side caching provides the capability to cache storage data from target disks on a Storage Area Network (SAN) to solid-state devices (SSDs) on the server. Based on the application's data access pattern, the caching algorithm identifies hot data and stores it on the SSDs. Subsequent read requests for that data are served from the SSD cache, improving the application's read performance. Write operations are monitored by the caching software so that affected blocks are invalidated in the cache before the writes are sent to the SAN.


The SSD devices used for caching can be on the server itself, i.e. built-in SSDs or SSDs in I/O expansion drawers attached through Serial Attached SCSI (SAS) controllers, or they can be provisioned from SAN-based flash storage. SSDs can be provisioned to the AIX LPAR in the following three modes.

  1. Dedicated Mode: In this mode, cache devices are directly attached to the AIX LPAR. A cache pool is created on the AIX LPAR, from which cache partitions can be created.
  2. NPIV Mode: In this mode, the cache device is available as a virtual Fibre Channel device on the AIX LPAR. A cache pool is created on the AIX LPAR, from which cache partitions can be created.
  3. Virtual Mode: In this mode, the cache device is assigned to the VIOS. The remainder of this blog outlines the functionality and configurations available in Virtual Mode, which is supported by PowerVM.


Role of PowerVM

The devices used for caching are cache partitions carved out of a cache pool. In Dedicated Mode and NPIV Mode, a cache pool is created in the AIX LPAR on the SSDs attached to the LPAR, and the entire cache pool is dedicated to that LPAR. Because the cache pool is visible only on that AIX LPAR, it cannot be shared with other LPARs. In addition, the existing restrictions of Live Partition Mobility (LPM) apply if the SSDs are provisioned to the LPAR through physical adapters. PowerVM added support for server-side caching by enabling virtualization of cache partitions through the Virtual I/O Server. This mode is known as Virtual Mode.


Details of Virtual Mode

SSD disks used for caching are provisioned to the PowerVM Virtual I/O Server (VIOS). A cache pool is created on the VIOS from the required SSDs. This cache pool can be carved up into several cache partitions on the VIOS, and each cache partition can then be individually assigned to a different virtual host (vhost) adapter. This mode provides a way to share the SSD disks on the VIOS among multiple LPARs for caching, boosting the performance of workloads running on the AIX LPARs that have been assigned a cache partition. The assigned cache partitions are discovered as cachedisks on each of the AIX LPARs where this feature is supported.
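As a rough sketch, the VIOS-side setup described above might look like the following. The device names (hdisk1, hdisk2), pool name (cmpool0), partition name (part1), partition size, and adapter (vhost0) are illustrative assumptions; check your own configuration before using them.

```shell
# Run on the VIOS. All names and sizes below are example values.

# List SSD devices eligible for use as cache devices
cache_mgt device list -l

# Create a cache pool from one or more SSDs
cache_mgt pool create -d hdisk1,hdisk2 -p cmpool0

# Carve a cache partition out of the pool
cache_mgt partition create -p cmpool0 -s 80G -P part1

# Assign the partition to the vhost adapter serving the client LPAR
cache_mgt partition assign -P part1 -v vhost0
```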


On the AIX LPAR, these cachedisks can be assigned directly to the desired target disks, and caching can then be started for those target disks. The figure below shows a view of virtual mode caching (AIX LPAR1 and LPAR2 are using cachedisks provisioned from a cache pool on the VIOS):
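On the client side, the assign-and-start flow might look like the sketch below, assuming the cache partition appears as cachedisk0 and the target disk is hdisk1 (both names are examples for your environment):

```shell
# Run on the AIX client LPAR. hdisk1 and cachedisk0 are example names;
# use lsdev to see how the devices were discovered on your LPAR.
lsdev -Cc disk

# Assign the cachedisk to a target disk and start caching it
cache_mgt cache assign -t hdisk1 -c cachedisk0
cache_mgt cache start -t hdisk1

# Verify which target disks are currently being cached
cache_mgt cache list
```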



For administering the cache pool and cache partitions in virtual mode, a new command, cache_mgt, is supported on the Virtual I/O Server. More details about the cache_mgt command on the VIOS are available in IBM Knowledge Center at the following link:



Live Partition Mobility of an LPAR using cache partitions

Live Partition Mobility (LPM) is supported for LPARs that have cache partitions carved out of SSDs on the Virtual I/O Server. During migration of an AIX LPAR with virtual mode caching enabled, caching for the target disks is stopped on the source CEC before the migration. Caching on the LPAR is then restarted automatically after it reaches the destination CEC, at which point the cachedisk contents are repopulated.


For LPM to restart caching on the destination CEC, a cache pool must already exist on a Virtual I/O Server (VIOS) of the destination CEC. If a cache pool exists on the destination VIOS, the LPM logic creates cache partitions of the required size from the available cache pool on that VIOS and assigns them to the LPAR during migration. If a cache pool does not exist, the user must create one on the VIOS of the destination CEC.
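Preparing the destination VIOS ahead of a migration might look like this sketch (hdisk3 and cmpool0 are example names; the pool only needs to exist with enough free space, since LPM creates the partition itself):

```shell
# Run on a VIOS of the destination CEC, before initiating LPM.
# hdisk3 and cmpool0 are example names.

# Identify SSDs available for caching
cache_mgt device list -l

# Create the cache pool that LPM will draw partitions from
cache_mgt pool create -d hdisk3 -p cmpool0

# Confirm the pool exists and has sufficient free space
cache_mgt pool list -l
```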


By default, LPM proceeds even if cache partitions cannot be provisioned from the VIOS of the destination CEC. This default behavior can be overridden with the cache_required tunable of each cache partition.


Details of the cache_required tunable

Each cache partition on the source VIOS is associated with a tunable that can be set to yes or no. If set to yes, failure to provision the cache partition on the destination CEC aborts the migration of the LPAR. If set to no, LPM proceeds even when the cache partition cannot be provisioned on the destination CEC; in that case, the target devices are not cached after the LPAR is migrated.

To set this tunable:

cache_mgt mig set -r {yes | no} -P <partitionName>

To list the current tunable value:

cache_mgt mig get -r [-P <partitionName>]
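Putting the two commands above together, a concrete invocation might look like the following (part1 is an example partition name):

```shell
# Run on the source VIOS. part1 is an example cache partition name.

# Make migration fail if this cache partition cannot be re-created
# on the destination CEC
cache_mgt mig set -r yes -P part1

# Check the current setting for all cache partitions
cache_mgt mig get -r
```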

The following figures depict the various stages of LPM of an LPAR when server-side caching is active.


Before LPM

The picture below shows hot blocks of hdisk0 and hdisk1 being cached on cachedisk0.

cachedisk0 is actually a cache partition carved out of a cache pool on the VIOS and assigned to the vhost0 adapter.

During LPM

When LPM of the LPAR is initiated from the source CEC to the destination CEC, caching for the target disks is stopped, and cachedisk0 is removed automatically by the LPM logic. This is only a temporary state and is generally transparent to users.



LPM automatically creates a new cache partition of the same size from an existing cache pool on the destination CEC's VIOS and assigns it to the client LPAR. LPM then discovers the newly assigned cache partition and starts caching for all the target devices for which caching was active on the source CEC. Repopulation of hot blocks in the cachedisk begins once caching is restarted for the target disks.


LPM behavior for special Use Cases

  1. An attempt to migrate an LPAR to a CEC where none of the VIOSes are at a level that supports caching: the migration operation is supported and continues unless the "cache_required" tunable is set to yes.
  2. An attempt to migrate an LPAR to a CEC where no cache pool on any of the VIOSes has enough free space to accommodate the cache partition from the source CEC: the migration operation is supported and continues unless the "cache_required" tunable is set to yes.
  3. An attempt to migrate an LPAR to a CEC with multiple VIOSes that each have their own cache pools: the LPM operation chooses a VIOS that can accommodate the required cache partition.

AIX & PowerVM versions that support server-side caching

Server-side caching was introduced in AIX version 7.2 and is also available on AIX 7.1 with Technology Level 4 Service Pack 2 (7100-04-02).

A corresponding minimum PowerVM (VIOS) level is also required.

Links to Additional Info

The AIX Knowledge Center section detailing the concepts, configuration, and management of server-side caching is available here:



Server-side caching on an AIX LPAR can significantly improve the performance of your storage-read intensive workloads, such as OLTP. PowerVM support for virtual mode caching and Live Partition Mobility greatly expands the scope where the feature can be leveraged, while making it easier to provision and use.

Explore the advantages of this feature and benefit from the boost in performance of your workloads.

Contacting the PowerVM Team
Have questions for the PowerVM team or want to learn more? Follow our discussion group on LinkedIn (IBM PowerVM) or IBM Community Discussions.