
vNIC - Introducing a New PowerVM Virtual Networking Technology

By Chuck Graham posted Fri June 19, 2020 09:27 AM

vNIC (Virtual Network Interface Controller) is a new PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that, when combined with an SR-IOV NIC, provides bandwidth-control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources (CPU, memory) required for network virtualization.
Until now, PowerVM network virtualization has mostly relied on the Shared Ethernet Adapter (SEA) in the VIOS and the virtual switch in the PowerVM Hypervisor to bridge Virtual Ethernet Adapters (VEAs) with the physical network infrastructure. While this approach provides great flexibility in enabling network connectivity for client LPARs, the SEA-based virtual networking solution incurs layered software overhead and multiple data copies from the time a packet is committed for transmission on the VEA to the time it is queued on the physical NIC (the same issues apply to received packets). In the meantime, the PCI industry has developed the SR-IOV (Single Root I/O Virtualization and Sharing) standard for hardware-based virtualization. An SR-IOV adapter allows creation of multiple virtual replicas of a PCI function, called Virtual Functions (VFs), and each VF can be assigned to an LPAR independently. An SR-IOV VF operates with little software intervention, providing superior performance with very little CPU overhead. The Host Ethernet Adapter (HEA) introduced with POWER6-based systems was an early implementation of such a hardware virtualization solution.
[Figure 1: PowerVM vNIC backed by SR-IOV]

In 2014, support was added for SR-IOV adapters on select models of POWER7+ systems and, more recently, on POWER8-based systems. While a dedicated SR-IOV VF provides a great performance advantage, that configuration does not allow Live Partition Mobility (LPM), which can be a major drawback. With this new technology, LPM is supported for the SR-IOV VFs that back vNIC adapters. This is possible because the SR-IOV VF (logical port) is assigned to the VIOS rather than to the client LPAR; the client uses it through the vNIC, so the LPAR remains LPM capable.

Figure 1 shows the key elements in the vNIC model. There is a one-to-one mapping or connection between vNIC adapter in the client LPAR and the backing logical port in the VIOS. Through a proven PowerVM technology known as LRDMA (logically redirected DMA), packet data for transmission (similarly for receive) is moved from the client LPAR memory to the SR-IOV adapter directly without being copied to the VIOS memory.  The benefits of bypassing VIOS on the data path are two-fold:
  1. Reduction in the overhead of memory copy  (i.e. lower latency)
  2. Reduction in the CPU and VIOS memory consumption (i.e. efficiency)
Besides the optimized data path, the vNIC device supports multiple transmit and receive queues, like many high-performance NIC adapters. These design points enable vNIC to achieve performance comparable to a directly attached logical port, even for workloads dominated by small packets. Figure 2 illustrates the control and data flow differences between the current Virtual Ethernet and the new vNIC support.
[Figure 2: Comparison of virtual Ethernet (vETH) and vNIC control and data flows]

In addition to the improved virtual networking performance, the client vNIC can take full advantage of the quality of service (QoS) capability of the SR-IOV adapters supported on Power Systems. Essentially, the QoS feature ensures that each logical port receives its share of adapter resources, including its share of the physical port bandwidth. A vNIC combined with an SR-IOV adapter provides the best of both quality of service and flexibility.

Link aggregation technologies such as IEEE 802.3ad/802.1AX Link Aggregation Control Protocol (LACP) and active-backup approaches (e.g., AIX Network Interface Backup (NIB), IBM i VIPA, or the Linux active-backup bonding mode) are supported for failover, with some limitations. In the case of LACP, the backing logical port must be the only VF on the physical port; this restriction is not specific to vNIC and applies to a directly attached VF as well. When using one of the active-backup approaches, a means of detecting a failover condition must be configured, such as an IP address to ping for AIX NIB. A vNIC and a VEA backed by SEA can coexist in the same LPAR. At this time SEA failover is not supported, but a similar capability is planned for the future.
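As a rough sketch, an AIX NIB configuration over two vNIC adapters might look like the following. The device names (`ent0`, `ent1`) and the ping address are placeholders for illustration, and the exact attributes should be verified against your AIX level:

```shell
# Create a NIB (active-backup) EtherChannel pseudo-device over two vNIC
# adapters: ent0 as the primary channel, ent1 as the backup.
# netaddr supplies the IP address to ping for failover detection,
# as described above. All values here are illustrative placeholders.
mkdev -c adapter -s pseudo -t ibm_ech \
      -a adapter_names=ent0 \
      -a backup_adapter=ent1 \
      -a netaddr=192.168.1.1
```

The resulting pseudo-adapter (e.g., `ent2`) is then configured with an IP interface as usual; traffic fails over to the backup adapter when the primary loses connectivity to the ping target.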

vNIC support can be added to a partition simply by adding a vNIC client virtual adapter to the partition using the HMC. When adding a vNIC client, the user selects the backing SR-IOV adapter, the physical port, and the VIOS hosting the server device, and defines the capacity and other parameters (e.g., Port VLAN ID, VLAN access list); default settings are used for any parameter the user does not specify. The HMC creates all the necessary devices in the client LPAR as well as in the VIOS, and supports configuration and control of vNICs through the GUI, command line, or REST API. Note that most of the vNIC GUI support is available only in the HMC "Enhanced" GUI (not in the Classic view). Figure 3 shows a vNIC device listing for an LPAR in the HMC Enhanced GUI. For vNIC removal, the HMC does the cleanup in both the LPAR and the VIOS. So, from a user's perspective, in normal cases they deal only with the client vNIC adapter; the backing devices are managed automatically by the HMC.
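On the HMC command line, the same operation uses `chhwres` with the vnic resource subtype. The sketch below assumes hypothetical system, partition, and VIOS names, and the `backing_devices` format (sriov/vios-name/vios-id/adapter-id/physical-port-id/capacity) should be checked against your HMC level:

```shell
# Add a vNIC client adapter to partition lpar1 on managed system sys1,
# backed by SR-IOV adapter 1, physical port 0, hosted by vios1 (LPAR ID 1),
# with 2% of the physical port capacity. All names and IDs are placeholders.
chhwres -r virtualio --rsubtype vnic -m sys1 -o a -p lpar1 \
        -a "port_vlan_id=2,backing_devices=sriov/vios1/1/1/0/2"

# List the vNIC adapters configured on the partition.
lshwres -r virtualio --rsubtype vnic -m sys1 --filter lpar_names=lpar1
```

Removing the vNIC with `chhwres -o r` likewise cleans up both the client adapter and its backing devices, mirroring the GUI behavior described above.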
[Figure 3: vNIC device listing in the HMC Enhanced GUI]

During LPM or Remote Restart operations, the HMC handles creation of the vNIC server and backing devices on the target system and cleanup of devices on the source system when the operation completes. The HMC also provides auto-mapping of devices (that is, selecting a suitable VIOS and SR-IOV adapter port to back each vNIC device); the SR-IOV port label, available capacity, and VIOS redundancy are some of the criteria used for auto-mapping. Optionally, users can specify their own mapping manually.
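In HMC CLI terms, an LPM operation that relies on vNIC auto-mapping could be sketched as follows; the system and partition names are placeholders, and the exact syntax of the `vnic_mappings` attribute for manual mapping should be taken from the HMC documentation:

```shell
# Validate the migration first; the HMC auto-maps each vNIC to a suitable
# VIOS and SR-IOV adapter port on the target system (names are placeholders).
migrlpar -o v -m srcsys -t dstsys -p lpar1

# Perform the migration with the same auto-mapping. A manual mapping, if
# needed, would be supplied via -i "vnic_mappings=..." per the HMC docs.
migrlpar -o m -m srcsys -t dstsys -p lpar1
```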

Minimum PowerVM & OS levels to support vNIC
  • PowerVM 2.2.4
    • VIOS Version 2.2.4
    • System Firmware Release 840
    • HMC Release 8 Version 8.4.0
  • Operating Systems
    • AIX 7.1 TL4 or AIX 7.2
    • IBM i 7.1 TR10 or IBM i 7.2 TR3
    • Note: Linux support to follow at a future date.
Contacting the PowerVM Team
Have questions for the PowerVM team or want to learn more? Follow our discussion group on LinkedIn (IBM PowerVM) or join the IBM Community Discussions.