Virtual Storage Redundancy with Dual VIOS Configuration

By Robert Kovacs posted Wed July 22, 2020 11:38 AM



I/O virtualization is one of the founding pillars of PowerVM. The Virtual I/O Server (VIOS) is a software appliance in PowerVM that facilitates virtualization of storage and network resources. Physical resources are assigned to the Virtual I/O Server, and these resources are shared among multiple client logical partitions (a.k.a. LPARs or VMs).

Since each Virtual I/O Server partition owns physical resources, any disruption in the Virtual I/O Server's sharing of a physical resource impacts the LPARs it serves. To ensure client LPARs have uninterrupted access to their I/O resources, it is necessary to set up a fully redundant environment. Redundancy options are available to remove every single point of failure in the path from a client LPAR to its resource.
Fundamentally, the primary reasons for recommending VIOS and I/O redundancy include:

  • Protection against unscheduled outages due to physical device failures or natural events
  • Outage avoidance in the event of a VIOS software issue (e.g. a VIOS crash)
  • Improved serviceability for planned outages
  • Future hardware expansion
  • Protection against unscheduled outages due to human intervention

Role of Dual Virtual I/O Server

A dual VIOS configuration is widely employed and is recommended for enterprise environments. It allows client LPARs to have multiple routes (two or more) to their resources. In this configuration, if one route becomes unavailable, the client LPAR can still reach its resources through another route.

These multiple paths can be leveraged to set up highly available I/O virtualization configurations, and they also provide multiple ways of building high-performance configurations. All of this is achieved with the help of advanced capabilities provided by PowerVM (VIOS and PHYP) and the operating systems on the client LPARs.
Both HMC and NovaLink allow configuration of dual Virtual I/O Server on the managed systems.

The remainder of this blog focuses on approaches to achieve virtual storage redundancy for client LPARs.

Enhanced storage availability

Below are the details on the various possible configurations for providing enhanced storage availability to client partitions. PowerVM offers three primary modes for virtualizing storage to client LPARs:

  1. Virtual SCSI  (vSCSI)
  2. N-Port ID Virtualization (NPIV)
  3. Shared Storage Pool (SSP)

Redundancy in Virtual SCSI (vSCSI)

vSCSI allows the Virtual I/O Server to drive the client LPARs' I/O to the physical storage devices. For more information, see Virtual SCSI on the IBM Knowledge Center.
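As an illustrative sketch, a physical disk is exported to a client LPAR from the VIOS restricted (padmin) shell with `mkvdev`. The device names (`hdisk2`, `vhost0`) and the virtual device label are assumptions for the example; actual names depend on the system configuration.

```shell
# On the VIOS (padmin restricted shell): list virtual SCSI server adapters
# and their current mappings
lsmap -all

# Map physical disk hdisk2 to the virtual SCSI server adapter vhost0;
# the client LPAR connected to vhost0 will discover it as a new hdisk
mkvdev -vdev hdisk2 -vadapter vhost0 -dev lpar1_rootvg
```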

Protection against Physical Adapter Failure

Figure 1


The basic solution against physical adapter failure is to assign two (or more) physical adapters, preferably from different I/O drawers, to the Virtual I/O Server. The storage must be made accessible via both physical adapters. The MPIO capability on the VIOS can then be leveraged to configure the additional physical paths in fail-over mode. Figure 1 shows storage connectivity to the client LPAR made available via both paths in the VIOS.

To effectively leverage the capacity of both adapters, this configuration can be fine-tuned with specific Multi-Path I/O (MPIO) settings that share the load across the paths. This can result in better utilization of resources on the system.
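A minimal sketch of such tuning from the VIOS restricted shell, assuming the LUN appears as `hdisk2` (an illustrative name) and uses the AIX default PCM:

```shell
# Disable SCSI reservations on the LUN so I/O can flow over multiple paths
# (add -perm to defer the change if the device is currently in use)
chdev -dev hdisk2 -attr reserve_policy=no_reserve

# Switch from fail_over to round_robin so I/O is spread across both
# physical adapters instead of using one path only
chdev -dev hdisk2 -attr algorithm=round_robin

# Verify the paths and their state
lspath
```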

Protection against VIOS outage (planned or unplanned)

Figure 2


A VIOS restart may be required during VIOS software updates, during which the VIOS is unavailable to service its dependent LPARs. A dual VIOS setup prevents the loss of storage access in any planned or unplanned VIOS outage scenario.
In this kind of architecture, the client LPARs are serviced by two VIOS partitions (i.e. dual VIOS). One VIOS acts as the primary server for all client requests and the other acts as the secondary/backup server, servicing clients only when the primary is unavailable. This arrangement is achieved with the help of storage multi-pathing on the client LPARs.

On client LPARs running the AIX operating system, multi-pathing is achieved by using the MPIO Default Path Control Module (PCM). MPIO manages the routing of I/O through the available paths to a given disk storage (logical unit). For more information on MPIO, see Multiple Path I/O on the IBM Knowledge Center.
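As a sketch of how the primary/backup arrangement is typically configured on the AIX client, path priorities can be set per path with `chpath` (a lower priority value is preferred). The disk and adapter names (`hdisk0`, `vscsi0`, `vscsi1`) are assumptions for the example:

```shell
# On the AIX client: show all paths to hdisk0 and their parent vscsi adapters
lspath -l hdisk0

# Prefer the path through vscsi0 (served by the primary VIOS);
# the vscsi1 path (backup VIOS) is used only on failover
chpath -l hdisk0 -p vscsi0 -a priority=1
chpath -l hdisk0 -p vscsi1 -a priority=2

# Shorten the health-check interval so failed paths are detected promptly
# (-P defers the change until the next reboot if the disk is busy)
chdev -l hdisk0 -a hcheck_interval=60 -a hcheck_mode=nonactive -P
```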

Protection against Disk/Storage Array Failures

Figure 3


The basic solution to protect against disk failures is to keep a mirrored copy of the disk data on another disk. This can be achieved with the mirroring functionality provided by the client operating system; on AIX, disk mirroring is provided by the Logical Volume Manager (LVM). For high availability, each mirror copy should reside on a separate physical disk, accessed through separate I/O adapters from different VIOS partitions. Furthermore, putting each disk into a separate disk drawer protects against data loss due to power failure.

  1. It is possible to access both the primary and mirrored disks through a single VIOS.
  2. A RAID array is another method of protecting against disk failures.
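A minimal sketch of LVM mirroring of the root volume group on an AIX client, assuming the second disk (served via the other VIOS) appears as `hdisk1`:

```shell
# Add the second disk to rootvg
extendvg rootvg hdisk1

# Mirror all logical volumes in rootvg onto hdisk1
mirrorvg rootvg hdisk1

# Recreate the boot image on the new mirror and allow booting from either copy
bosboot -ad /dev/hdisk1
bootlist -m normal hdisk0 hdisk1
```

The `bosboot` and `bootlist` steps matter specifically for rootvg: without them the LPAR could not boot if the original disk were lost.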

High redundancy system with dual VIOS

Figure 4


In order to achieve a highly redundant system, all the solutions discussed above can be combined into an end-to-end redundancy solution.

Here, the client LPAR sees two disks, one of which is used as a mirrored disk to protect against disk failure. Each disk seen by the client LPAR has paths from two different VIOS partitions, which ensures protection in case of a VIOS failure. Also, each VIOS has two physical adapters to provide redundancy in case of a physical adapter failure. Though this arrangement is good from a redundancy perspective, it is certainly not the most efficient one, since one VIOS is used for backup purposes only and is not fully utilized. To utilize all the available VIO Servers effectively, the load can be shared across them. This configuration is explained in the next section with the help of AIX client LPARs.

High redundancy on AIX client LPARs with dual VIOS

For effective utilization of the VIO Servers and their resources, we can create a configuration where one VIOS acts as the primary VIOS for one half of the client LPARs it serves and as the secondary VIOS for the other half. The second VIOS does the reverse: primary for the second half of the client LPARs and secondary for the first half. This arrangement is illustrated in Figure 5.

Figure 5


Here, we have two LPARs (LPAR 1 and LPAR 2) that are serviced by VIOS 1 and VIOS 2. LPAR 1 uses VIOS 1 as the active path and VIOS 2 as the passive path to reach its Disk A. Similarly, LPAR 2 uses VIOS 2 as the primary path and VIOS 1 as the secondary path to reach Disk B. An important thing to note is that for the mirrored disks the configuration is exactly the opposite: on LPAR 1 the active path to mirrored disk A' is VIOS 2 and the passive path is VIOS 1, while on LPAR 2 the active path to its mirrored disk is VIOS 1 and the passive path is VIOS 2. The active and passive/backup VIOS are designated by the path priority set on each disk's paths.

Using this configuration, we can shut down one of the VIOS for a scheduled maintenance and all active clients can automatically access their disks through the backup VIOS. When the Virtual I/O Server comes back online, no action is needed on the virtual I/O clients.
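Although no client-side action is normally needed, the path states can be verified on the AIX client after the VIOS returns, and a path that remains in a failed state can be recovered. The device names here (`hdisk0`, `vscsi1`) are illustrative:

```shell
# Paths should return to "Enabled" once the VIOS is back online
lspath -l hdisk0

# If a path remains Failed/Missing, re-enable it or rescan for devices
chpath -l hdisk0 -p vscsi1 -s enable
cfgmgr
```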

A common practice is to use mirroring in the client LPARs for the rootvg disks, while the datavg disks are protected by the RAID configuration provided by the storage array.

RAID stands for Redundant Array of Independent Disks, and its design goals are to increase data reliability and input/output (I/O) performance. When multiple physical disks are set up to use RAID technology, they are said to be in a RAID array. The array distributes data across multiple disks, but from the perspective of the user and the operating system it appears as a single disk.

Redundancy in NPIV

N_Port ID Virtualization (NPIV) is a method for virtualizing physical Fibre Channel adapter ports so that they present multiple virtual World Wide Port Names (WWPNs) and therefore multiple N_Port_IDs. Once all the applicable WWPNs are registered with the FC switch, each of these WWPNs can be used for SAN masking/zoning or LUN presentation.

More information on NPIV is available at the IBM Knowledge Center: Virtual Fibre Channel

Redundancy with single VIOS and dual VIOS

Similar to vSCSI, redundancy can be built with NPIV mode of virtualization. Details are shown in Figure 6 below.

Figure 6

The figure above shows that redundancy against physical adapter failure can be achieved by adding another physical Fibre Channel HBA, and redundancy against VIOS failure by providing a redundant path through another VIOS.
Note: Because the storage is mapped directly from the SAN to the client LPAR, all of the client LPAR's virtual WWPNs/N_Port_IDs must be zoned/masked for the same storage on the SAN in order to have protection against physical adapter failures. Additional adapters and paths cannot guarantee redundancy unless zoning is done properly. As with vSCSI, the active and passive/failover paths are managed by the multi-path software on the client LPAR.
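To gather the WWPNs that need to be zoned, the client's virtual Fibre Channel adapters can be inspected on the AIX client; the adapter name `fcs0` is an illustrative assumption:

```shell
# On the AIX client: list the Fibre Channel adapters (virtual FC adapters
# appear as regular fcsX devices)
lsdev -c adapter | grep fcs

# Show the virtual WWPN of fcs0 ("Network Address" field); every such WWPN
# must be zoned to the same storage on the SAN
lscfg -vl fcs0 | grep "Network Address"
```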


Subtle differences between vSCSI- and NPIV-based virtualization:

  1. In vSCSI, multi-pathing software typically controls paths both in the client LPARs and in the Virtual I/O Servers.
  2. In NPIV, multi-pathing happens at the client level, and the VIO Server is largely a pass-through.

High redundancy system with dual VIOS

Generally, in a dual VIOS redundancy setup, each VIOS is configured with at least two virtual Fibre Channel adapters, each backed by an independent physical Fibre Channel adapter. Each of these physical adapters is connected to a separate switch to provide redundancy against switch failure. Each client partition is configured with four virtual Fibre Channel adapters, two of which are mapped to virtual Fibre Channel adapters on one VIOS and the other two to virtual Fibre Channel adapters on the other VIOS. The client then has four WWPNs and four N_Port_IDs. For the client LPAR to see the same storage from all of these ports, zoning has to be done on the SAN. The multi-path software on the client LPAR takes care of routing I/O through a passive path if the active path fails.
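On the VIOS side, the mapping between virtual and physical Fibre Channel ports is sketched below; the `vfchost0`/`fcs0` names are assumptions for the example:

```shell
# On each VIOS (padmin shell): list NPIV-capable physical FC ports
lsnports

# Bind the virtual FC host adapter vfchost0 to physical port fcs0
vfcmap -vadapter vfchost0 -fcp fcs0

# Verify the NPIV mappings and whether the client has logged in to the fabric
lsmap -all -npiv
```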

In the case of VIOS serving multiple client LPARs, the workload can be spread across all the available VIO Servers and I/O adapters as shown in Figure 7 below.

Figure 7


As shown above, client LPAR 1 and LPAR 2 have paths through both VIO Servers, i.e. VIOS 1 and VIOS 2, to reach their respective storage disks/LUNs. LPAR 1 uses VIOS 1 as the active path and VIOS 2 as the passive path, while LPAR 2 uses VIOS 2 as active and VIOS 1 as passive. In this setup, if one VIOS is down for maintenance or the active path is unable to route traffic, the multi-path software running on the client LPAR routes the I/O through the other available path.

One important thing to note in this configuration is that each VIOS carries two paths, each on a separate fabric. If a switch fails, the client fails over to the other path through the same VIOS rather than to the other VIOS.

Shared Storage Pool (SSP)

VIOS SSP provides shared storage virtualization across multiple systems through the use of shared storage and clustering of Virtual I/O Servers. It is an extension of PowerVM's existing storage virtualization technique using VIOS and vSCSI. SSP aggregates a heterogeneous set of storage devices into pools and tiers. Tiering gives administrators the ability to segregate user data/storage based on their desired requirements or criteria (e.g. a service-level agreement, SLA). Virtual devices are carved out of a pool/tier and mapped to client LPARs via the vSCSI mode of virtualization. All the redundancy configurations described above for vSCSI remain valid for SSP, since SSP presents the same standard vSCSI target interface to client LPARs. Figure 8 shows a dual Virtual I/O Server configuration on each CEC.

Figure 8

The figure above shows a single storage pool spanning multiple VIO Servers and multiple systems, thus enabling location transparency.
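As an illustrative sketch, an SSP cluster is built and grown with the VIOS `cluster` command; all names, hostnames, and disks below are assumptions for the example:

```shell
# On the first VIOS: create the cluster with a repository disk (hdisk3)
# and an initial shared pool backed by hdisk4
cluster -create -clustername demo_cl -repopvs hdisk3 \
        -spname demo_sp -sppvs hdisk4 \
        -hostname vios1.example.com

# Add a second VIOS node so the pool is served from multiple VIO Servers
cluster -addnode -clustername demo_cl -hostname vios2.example.com

# Check cluster membership and state
cluster -status -clustername demo_cl
```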

More information on SSP is available in the IBM Knowledge Center.

Failure Groups in SSP:

To tolerate disk failures in a pool/tier, SSP offers the creation of failure groups. Failure groups let users segregate disks into groups, possibly with different failure characteristics, and mirror the data across both groups. Each tier in the SSP can have a maximum of two failure groups, and a failure group can be added to an SSP tier at any time. Figure 9 shows three tiers in the SSP, of which only two are mirrored.
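A sketch of adding a second failure group from a VIOS node in the cluster, using the `failgrp` command (available on recent VIOS levels); the failure group name and disks are illustrative:

```shell
# List the existing failure groups in the pool
failgrp -list

# Add a second failure group backed by hdisk5 and hdisk6; the pool data
# is then mirrored across the two failure groups
failgrp -create -fg FG2: hdisk5 hdisk6
```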

Figure 9


An SSP configuration depends heavily on the networking infrastructure that facilitates communication between all participating VIOS nodes in the SSP cluster. If a node is unable to communicate with all the other nodes in the SSP cluster, that node will be temporarily expelled from the cluster.
We will discuss VIOS redundancy practices related to enhanced network availability in a future blog.

Additional Information

More information on IBM PowerVM Virtualization and Configuration is available at the following locations:

More information on VIOS redundancy considerations is available at the IBM Knowledge Center: Redundancy considerations

Contacting the PowerVM Team

Have questions for the PowerVM team or want to learn more? Follow our discussion group on LinkedIn (IBM PowerVM) or the IBM Community Discussions.