PowerVM Shared Storage Pool Enhancements

By Rob Gjertsen posted Thu June 25, 2020 06:33 PM


New features for Shared Storage Pool

PowerVM continues to improve Shared Storage Pools (SSPs), which simplify cloud management and improve the efficiency of storage usage.  PowerVM 2.2.4 includes the following SSP enhancements: 

  • Storage tiers within a Shared Storage Pool allow greater flexibility and control for quality of service, isolation, and redundancy of storage. The new storage tier facility allows up to 10 tiers of storage to be defined within a storage pool, giving customers control over the creation of storage tiers and the placement of data on the appropriate class of storage.
  • The ability to dynamically grow a virtual disk has been added.

Background on Shared Storage Pools

One aspect of PowerVM is known as VIOS SSP, which stands for Virtual I/O Server Shared Storage Pools. 

VIOS SSP allows a group of VIOS nodes to form a cluster and provision virtual storage to client LPARs.  The VIOS nodes in the cluster all have access to the same underlying physical disks, which are grouped into a single pool of storage.  A virtual disk, or logical unit (LU), can be carved out of that storage pool and mapped to a client LPAR as a virtual SCSI device.  An LU may be thin or thick provisioned: thin-provisioned LUs do not reserve blocks until they are written to, while thick-provisioned LUs reserve their storage when the LU is created. 
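The difference between thin and thick provisioning can be sketched in a few lines of Python. This is a simplified, hypothetical model for illustration only; it does not reflect actual VIOS internals, and all class and method names are invented:

```python
# Hypothetical sketch of thin vs. thick LU provisioning; not VIOS code.

class StoragePool:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.reserved = 0

    def reserve(self, blocks):
        if self.reserved + blocks > self.capacity:
            raise RuntimeError("pool out of space")
        self.reserved += blocks

class LogicalUnit:
    def __init__(self, pool, size_blocks, thick=False):
        self.pool = pool
        self.size = size_blocks
        self.written = set()
        self.thick = thick
        if thick:
            # Thick-provisioned: reserve every block at creation time.
            pool.reserve(size_blocks)

    def write(self, block):
        if block >= self.size:
            raise IndexError("write past end of LU")
        if not self.thick and block not in self.written:
            # Thin-provisioned: reserve a block only on its first write.
            self.pool.reserve(1)
        self.written.add(block)

pool = StoragePool(capacity_blocks=1000)
thin = LogicalUnit(pool, size_blocks=500)               # reserves nothing yet
thick = LogicalUnit(pool, size_blocks=300, thick=True)  # reserves 300 now
thin.write(0)
print(pool.reserved)  # 301: 300 thick blocks plus 1 thin block on first write
```

The key point the sketch captures is that a thin LU's nominal size can exceed what is actually reserved, which is why thin LUs can oversubscribe the pool.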

Once an LU has been created in the pool, snapshots or clones of that LU can be created.  The number of snapshots and clones created is limited only by the amount of available storage in the pool, and creating these objects happens nearly instantly.  Snapshots are used for rolling back to previous points in time.  Clones are used for provisioning new space efficient copies of an LU.  These clones are managed by PowerVC capture and deploy image management operations. 
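The near-instant, space-efficient nature of clones comes from copy-on-write block sharing. The following is a deliberately coarse Python sketch of that idea (sharing the whole block map rather than individual blocks); it is illustrative only and not the actual SSP implementation:

```python
# Hypothetical copy-on-write clone sketch; not the actual SSP implementation.

class LU:
    def __init__(self, blocks=None):
        # block index -> data; a clone shares this mapping until it diverges
        self.blocks = dict(blocks) if blocks else {}
        self.shared = False

    def clone(self):
        # A clone initially shares all parent blocks, so it is created
        # near-instantly and consumes no additional storage.
        child = LU()
        child.blocks = self.blocks  # shared, not copied
        child.shared = True
        return child

    def write(self, idx, data):
        if self.shared:
            # Copy-on-write: a private copy is made only when the clone
            # diverges from its parent.
            self.blocks = dict(self.blocks)
            self.shared = False
        self.blocks[idx] = data

parent = LU({0: "a", 1: "b"})
child = parent.clone()
child.write(0, "c")
print(parent.blocks[0], child.blocks[0])  # a c
```

Snapshots work on the same principle, which is why the only real limit on their number is the pool's available storage.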

These features allow rapid deployment of new client LPARs in a Cloud Computing environment.  The storage pooling model of VIOS SSP simplifies administration of large amounts of storage.  The clustering aspect of VIOS SSP provides fault tolerance between VIOS multipathing pairs, and simplifies verification that other nodes can see the storage and are eligible for LPAR mobility operations. 

Storage Tiers Use Cases

Storage tiers provide more control over virtual storage allocation within the pool.  Rather than treating the physical disks as a homogeneous resource, storage tiers allow the disks to be classified and grouped by the administrator.  This is useful for classifying storage based on performance characteristics so that, for example, an administrator could create a fast tier, a medium tier, and a slow tier.  If different LUs in the pool have different redundancy requirements, each tier can be mirrored or not, as necessary; in a configuration without multiple tiers, the administrator would have to decide whether to mirror the entire pool.  Storage tiers also allow for isolation and multi-tenancy use cases, where storage for LPARs 1-3 may come from tier1 while storage for LPARs 4-5 may come from tier2. 
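A tier-aware placement decision like the one described above could be sketched as follows. The tier names, attributes, and the fastest-first selection policy are all assumptions made for illustration; VIOS does not select tiers this way (the administrator names a tier explicitly when creating an LU):

```python
# Hypothetical tier-selection sketch; tier names and policy are invented.

TIERS = {
    "fast":   {"capacity_gb": 500,  "mirrored": True},
    "medium": {"capacity_gb": 2000, "mirrored": True},
    "slow":   {"capacity_gb": 8000, "mirrored": False},
}

def pick_tier(required_gb, need_mirroring):
    """Choose the first tier (fastest first) that satisfies the request."""
    for name in ("fast", "medium", "slow"):
        t = TIERS[name]
        if t["capacity_gb"] >= required_gb and (t["mirrored"] or not need_mirroring):
            return name
    raise RuntimeError("no tier can satisfy the request")

print(pick_tier(100, need_mirroring=True))    # fast
print(pick_tier(5000, need_mirroring=False))  # slow
```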

Moving LUs between Tiers

The Live Storage Mobility feature for Storage Tiers allows an LU to be moved from one tier to another while the LU is actively being used.  This operation is transparent to the LPAR client.  When clone LUs are moved from one tier to another, the administrator can choose to maintain the space efficient block sharing between the parent LU and the clone LU.  They can also choose to break this space efficient block sharing and move the clone LU to a different tier than its parent.  Snapshots of an LU always move with the LU itself so that a rollback operation does not impact the quality of service, mirroring, or isolation of the LU. 

Storage Tiers – Frequently Asked Questions

  • How many tiers can I have in the storage pool? 
    • VIOS SSP supports a maximum of 10 tiers.  This includes the system tier and up to 9 user tiers. 
  • What is the purpose of a system tier? 
    • A system tier contains pool metadata and file metadata.  This includes special metadata files and databases used by the VIOS, and includes the indirect blocks which point to LU data blocks. 
  • What happens if a user tier encounters disk errors or runs out of space? 
    • These errors will only impact that specific tier. 
  • How much space do I allocate for the system tier?
    • For smaller pools (total capacity under 20 TB), we recommend 1% of the pool capacity; for example, a 10 TB pool would get a 100 GB system tier. For larger pools, the system tier can be 0.3% of the total capacity, so for a 100 TB pool a 300 GB system tier would be sufficient.
  • What happens if the system tier encounters disk errors or runs out of space? 
    • These errors will impact the entire pool.  To help prevent this from happening, it is recommended that users always configure mirroring in the system tier.  It is also recommended that the system tier be set to “restricted” mode (aka “system” mode), where the system tier is used exclusively for metadata and no user data LUs are assigned to it.  This helps prevent the system tier from running out of space, since the metadata in the pool is several orders of magnitude smaller than the user data. 
  • What are the interfaces that allow me to administer storage pools, storage tiers, or LUs? 
    • There is a VIOS command line interface available in the padmin shell on each VIOS.  The HMC provides both a GUI and a command line interface with access to the same underlying operations.  Lastly, to make use of the cloning functionality of VIOS SSP, the PowerVC capture and deploy interfaces must be used.  
  • Is it required that I move from a non-tiered to a tiered model when I upgrade to the VIOS 2.2.4 release? 
    • No.  If you would prefer to leave your existing pools in a non-tiered configuration, or create new pools without tiers on your VIOS nodes, that is certainly fine. 
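The system tier sizing rule from the FAQ above (1% of capacity for pools under 20 TB, 0.3% for larger pools) can be expressed as a small helper. The function name is invented for illustration; the threshold and percentages come straight from the guidance above:

```python
def system_tier_size_gb(pool_capacity_tb):
    """Recommended system tier size: 1% of capacity for pools under
    20 TB total, 0.3% of capacity for larger pools."""
    pct = 0.01 if pool_capacity_tb < 20 else 0.003
    return pool_capacity_tb * 1000 * pct  # TB -> GB (decimal units)

print(system_tier_size_gb(10))   # 100.0 GB for a 10 TB pool
print(system_tier_size_gb(100))  # 300.0 GB for a 100 TB pool
```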


Dynamically Growing an LU

Prior to the PowerVM 2.2.4 release, the size of an LU was set at creation time and could not be adjusted later.  This meant that if additional capacity was needed in the client LPAR, an entirely new LU would need to be provisioned.  This limitation is removed in PowerVM 2.2.4, so an LU's size (i.e. capacity) can be increased dynamically.  This is more convenient and matches up better with the way SAN administrators traditionally operate.

You can grow an LU at the VIOS command line or through the HMC interfaces.  The syntax is lu -resize -lu <luName> -size <totalNewSize>.  The -luudid (unique device id) flag can also be used to identify the LU.  The size specified by the caller is the new total size of the LU (not the amount to grow by).  There is no support for shrinking an LU, so if the size specified is smaller than the LU's existing size, the operation will fail.  If the LU is thickly provisioned, the newly added space will also be thickly provisioned.
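The resize semantics described above (absolute new size rather than a delta, no shrinking, and immediate reservation of added space for thick LUs) can be sketched conceptually. This is a hypothetical Python model, not the actual VIOS implementation:

```python
# Hypothetical sketch of LU resize semantics; not VIOS code.

class LU:
    def __init__(self, pool_free_gb, size_gb, thick=False):
        self.size_gb = size_gb
        self.thick = thick
        # Thick LUs reserve their full size at creation time.
        self.pool_free_gb = pool_free_gb - (size_gb if thick else 0)

    def resize(self, new_total_gb):
        """Grow the LU to new_total_gb (an absolute size, not a delta)."""
        if new_total_gb < self.size_gb:
            # Shrinking is not supported; the operation fails.
            raise ValueError("LU shrink is not supported")
        grow_by = new_total_gb - self.size_gb
        if self.thick:
            # A thick LU reserves the newly added space immediately.
            if grow_by > self.pool_free_gb:
                raise RuntimeError("insufficient pool space")
            self.pool_free_gb -= grow_by
        self.size_gb = new_total_gb

lu = LU(pool_free_gb=1000, size_gb=100, thick=True)
lu.resize(250)
print(lu.size_gb, lu.pool_free_gb)  # 250 750
```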

Contacting the PowerVM Team

Have questions for the PowerVM team or want to learn more?  Follow our discussion group on LinkedIn (IBM PowerVM) or the IBM Community Discussions.