PowerVM vNIC and vNIC Failover FAQs

Thu July 16, 2020 04:34 PM

PowerVM introduced vNIC (virtual Network Interface Controller) support in 2015 on POWER8 systems and enhanced it in 2016 with vNIC Failover. Given the strong customer interest in this technology, we have compiled answers to frequently asked questions below.


"What are the requirements for vNIC?"

POWER8 Servers

  • IBM Power System E870 (9119-MME), IBM Power System E880 (9119-MHE),
    IBM Power System S824 (8286-42A), IBM Power System S814 (8286-41A),
    IBM Power System S822 (8284-22A), IBM Power System S824L (8247-42L),
    IBM Power System S822L (8247-22L), IBM Power System S812L (8247-21L),
    IBM Power System E850 (8408-E8E)
     
  • vNIC is also supported when the SR-IOV enabled adapter is in the PCIe Gen3 I/O expansion drawer (2 SR-IOV slots per Fan-out Module)
  • PowerVM 2.2.4
    • VIOS Version 2.2.4
    • System Firmware Release 840
      • IBM Power System E850 (8408-E8E) requires system firmware 840.10
    • HMC Release 8 Version 8.4.0
  • Operating Systems
    • AIX 7.1 TL4 or AIX 7.2
    • IBM i 7.1 TR10 or IBM i 7.2 TR3

POWER9 Scale-out Servers

  • IBM Power System S922 (9009-22A), IBM Power System S914 (9009-41A),
    IBM Power System S924 (9009-42A), IBM Power System H922 (9223-22H),
    IBM Power System H924 (9223-42H), IBM Power System L922 (9008-22L)
     
  • SR-IOV adapters also supported in the PCIe Gen3 I/O expansion drawer
  • System Firmware FW910.00 or later
  • HMC Version / Release: 9.1.910.0 or later
  • VIOS 2.2.6.21 or later
  • Operating systems
    • IBM i 7.2 TR8 and 7.3 TR4 or later
    • AIX
      • AIX Version 7.2 with the 7200-02 Technology Level and Service Pack 7200-02-02-1810 or later
      • AIX Version 7.1 with the 7100-05 Technology Level and Service Pack 7100-05-02-1810 or later
    • Linux
      • SUSE Linux Enterprise Server 12, Service Pack 3, or later, with all available maintenance updates from SUSE
        • Customers cannot perform a network install with ibmvnic on SLES 12 SP3. The install image for SP3 does not contain the updates needed for this to work. IBM is advising customers to install with virtual Ethernet or a dedicated adapter, upgrade to the supported kernel, then create a vNIC adapter.
      • SUSE Linux Enterprise Server 15, or later, with all available maintenance updates from SUSE
      • Red Hat Enterprise Linux 8.1 or later, with all available maintenance updates from Red Hat

POWER9 Scale-Up Servers

  • IBM Power System E950 (9040-MR9),
    IBM Power System E980 (9080-M9S)
     
  • SR-IOV adapters also supported in the PCIe Gen3 I/O expansion drawer
  • System Firmware FW920.20 or later
  • HMC Version / Release: V9 R1.921.0 or later
  • VIOS
    • VIOS 2.2.6.31 or later
    • VIOS 3.1 or later
  • Operating systems
    • IBM i 7.2 TR9 and 7.3 TR5 or later (E980 only)
    • AIX
      • AIX Version 7.2 with the 7200-03 Technology Level or later
      • AIX Version 7.2 with the 7200-02 Technology Level and Service Pack 7200-02-03-1845 or later
      • AIX Version 7.2 with the 7200-01 Technology Level and Service Pack 7200-01-05-1845 or later
      • AIX Version 7.1 with the 7100-05 Technology Level and Service Pack 7100-05-03-1838 or later
      • AIX Version 7.1 with the 7100-04 Technology Level and Service Pack 7100-04-07-1845 or later
    • Linux
      • SUSE Linux Enterprise Server 12, Service Pack 3, or later, with all available maintenance updates from SUSE
        • Customers cannot perform a network install with ibmvnic on SLES 12 SP3. The install image for SP3 does not contain the updates needed for this to work. IBM is advising customers to install with virtual Ethernet or a dedicated adapter, upgrade to the supported kernel, then create a vNIC adapter.
      • SUSE Linux Enterprise Server 15, or later, with all available maintenance updates from SUSE
      • Red Hat Enterprise Linux 8.1 or later, with all available maintenance updates from Red Hat



Is a partition with vNIC client virtual adapters a candidate for LPM and remote restart?

Yes.



How do the network virtualization technologies for Ethernet adapters compare?

Technology   Live Partition Mobility   Quality of service (QoS)   Direct access perf.   Link Aggregation   Requires VIOS   Server side redundancy
SR-IOV       No (1)                    Yes                        Yes                   Yes (2)            No              No
vNIC         Yes                       Yes                        No (3)                Yes (2)            Yes             Yes (4)
SEA/vEth     Yes                       No                         No                    Yes                Yes             Yes
IVE/HEA      No                        No                         Yes                   Yes                No              No

Notes:

  1. SR-IOV can optionally be combined with VIOS and virtual Ethernet to use higher-level virtualization functions such as Live Partition Mobility (LPM); however, the client partition will not receive the performance or QoS benefits.
  2. Some limitations apply; see the FAQ on link aggregation.
  3. Generally provides better performance and requires fewer system resources than SEA/virtual Ethernet.
  4. Requires vNIC Failover.

 



What I/O adapters support vNIC?

All SR-IOV capable adapters available on POWER8 and POWER9 systems can provide backing devices for vNIC clients. The adapter must be in SR-IOV shared mode before a client vNIC virtual adapter is created.

POWER8 system adapters

vNIC Supported SR-IOV Capable Network I/O Adapters         Low profile - multi OS   Full high - multi OS   Low profile - Linux only   Full high - Linux only
PCIe2 4-port (2x10GbE+2x1GbE) SR Optical fiber and RJ45    EN0J                     EN0H                   EL38                       EL56
PCIe2 4-port (2x10GbE+2x1GbE) copper twinax and RJ45       EN0L                     EN0K                   EL3C                       EL57
PCIe2 4-port (2x10GbE+2x1GbE) LR Optical fiber and RJ45    EN0N                     EN0M                   n/a                        n/a
PCIe3 4-port 10GbE SR optical fiber                        EN16                     EN15                   n/a                        n/a
PCIe3 4-port 10GbE copper twinax                           EN18                     EN17                   n/a                        n/a

POWER9 system adapters

SR-IOV Capable Network I/O Adapters                        Low profile - multi OS FC   Full high - multi OS FC
PCIe2 4-port (2x10GbE+2x1GbE) SR Optical fiber and RJ45    EN0J                        EN0H
PCIe2 4-port (2x10GbE+2x1GbE) copper twinax and RJ45       EN0L                        EN0K
PCIe3 4-port 10GbE SR optical fiber                        EN16                        EN15
PCIe3 LP 2-Port 10Gb NIC&ROCE SR/Cu Adapter                EC2R (1)                    EC2S (1)
PCIe3 LP 2-Port 25/10Gb NIC&ROCE SR/Cu Adapter             EC2T (1)                    EC2U (1)
PCIe3 LP 2-port 100GbE NIC & RoCE QSFP28 Adapter x16       EC3L (1)                    EC3M (1)
PCIe4 LP 2-port 100GbE NIC & RoCE QSFP28 Adapter x16       EC67 (1)                    EC66 (1)

Notes:

  1. No RoCE support with vNIC.



How many vNIC clients are supported per SR-IOV adapter?

The table below shows the number of vNIC backing devices supported by each physical port and by each adapter. When each vNIC client is backed by only one backing device on the adapter, these values are also the number of vNIC clients the physical port and adapter support. If more than one backing device for the same vNIC client is configured on the same adapter, the number of vNIC clients that adapter can support is reduced accordingly.

SR-IOV Capable Network I/O Adapters                        Feature codes            Physical port   # of vNIC backing devices   # of vNIC backing devices
                                                                                    link speed      per physical port           per adapter
PCIe2 4-port (2x10GbE+2x1GbE) SR Optical fiber and RJ45    EN0J, EN0H, EL38, EL56   1Gb             4                           48
                                                                                    10Gb            20
PCIe2 4-port (2x10GbE+2x1GbE) copper twinax and RJ45       EN0L, EN0K, EL3C, EL57   1Gb             4                           48
                                                                                    10Gb            20
PCIe2 4-port (2x10GbE+2x1GbE) LR Optical fiber and RJ45    EN0N, EN0M               1Gb             4                           48
                                                                                    10Gb            20
PCIe3 4-port 10GbE SR optical fiber                        EN16, EN15               10Gb            16                          64
PCIe3 4-port 10GbE copper twinax                           EN18, EN17               10Gb            16                          64
PCIe3 LP 2-Port 10Gb NIC&ROCE SR/Cu Adapter                EC2R, EC2S               10Gb            40                          80
PCIe3 LP 2-Port 25/10Gb NIC&ROCE SR/Cu Adapter             EC2T, EC2U               25/10Gb         40                          80
PCIe3 LP 2-port 100GbE NIC&RoCE QSFP28 Adapter x16         EC3L, EC3M               40/100Gb        60                          120
PCIe4 LP 2-port 100GbE NIC & RoCE QSFP28 Adapter x16       EC66, EC67               40/100Gb        60                          120

 



Is there a limit on the number of client vNIC adapters for a partition?

Yes, the limit in FW840.00 is 6 client vNIC adapters per partition. With FW840.10 this was increased to 10 client vNIC adapters per partition.



Does vNIC require an SR-IOV adapter configured in SR-IOV shared mode?

Yes. An adapter must be configured in SR-IOV shared mode before a vNIC client can be configured. In addition, there must be an available logical port and available physical port capacity (that is, the total of the activated logical ports' capacity values for the physical port must be less than 100%). For example, if logical ports totaling 80% capacity are already activated on a physical port, only the remaining 20% of capacity is available for additional logical ports. The sketch below shows one way to check this from the HMC command line.
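
As a quick check before creating a vNIC client, the HMC command line can list SR-IOV adapters, physical ports, and logical ports. This is a minimal sketch assuming the documented lshwres SR-IOV syntax; <managed-system> is a placeholder, and the exact output attributes vary by HMC release:

    lshwres -m <managed-system> -r sriov --rsubtype adapter
    lshwres -m <managed-system> -r sriov --rsubtype physport --level eth
    lshwres -m <managed-system> -r sriov --rsubtype logport --level eth

The adapter listing shows whether each adapter is in shared mode, and the physical and logical port listings can be used to total the capacity values already activated on each port.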



For LPM, is an SR-IOV adapter in shared mode required on the target system if the partition to be moved is configured with a client vNIC adapter?

Yes, the target system must have an adapter in SR-IOV shared mode with an available logical port and available capacity on a physical port.



Do I need to configure an SR-IOV logical port and vNIC server virtual adapter to use vNIC?

No. When a partition is activated with a client vNIC virtual adapter, or when a client vNIC virtual adapter is dynamically added to a partition, the HMC and platform firmware create the vNIC server and the SR-IOV logical port backing device and dynamically add them to the VIOS.



Do I need to unconfigure an SR-IOV logical port and vNIC server virtual adapter when I dynamically remove a client vNIC adapter?

Generally no. When the client vNIC adapter is removed, the platform dynamically removes the SR-IOV logical port and vNIC server from the VIOS. If the DLPAR removal of the client vNIC adapter fails partway through, the user may need to remove the SR-IOV logical port and/or the vNIC server manually (see the sketch below). While the client vNIC adapter is configured, its SR-IOV logical port and vNIC server cannot be removed.
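
To identify a leftover vNIC server and its SR-IOV logical port on the VIOS, the VIOS lsmap command can be used. This is a minimal sketch assuming the -vnic option introduced with VIOS 2.2.4; the device name vnicserver0 is illustrative:

    $ lsmap -all -vnic
    $ lsmap -vadapter vnicserver0 -vnic

The output maps each vNIC server adapter to its client partition and backing SR-IOV logical port, which helps confirm what was left behind by an incomplete DLPAR remove.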



Can I configure client vNIC virtual adapters using the hardware management console (HMC) classic GUI?

The HMC classic GUI does not support configuration of a vNIC client virtual adapter.  A vNIC client virtual adapter may be configured using the HMC enhanced+ GUI or REST APIs.
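
On recent HMC levels the command line can also be used. The following is a hedged sketch only; the managed system, partition, and VIOS names are placeholders, and the backing_devices attribute format should be verified against the chhwres help for your HMC release before use:

    chhwres -m <managed-system> -r virtualio --rsubtype vnic -o a -p <lpar-name> \
        -a "port_vlan_id=0,backing_devices=sriov/<vios-name>//<sriov-adapter-id>/<phys-port-id>/<capacity>"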



Can SR-IOV logical ports assigned to a partition (non-vNIC) and SR-IOV logical ports configured as backing devices for client vNIC adapters share the same SR-IOV adapter physical port?

Yes, SR-IOV logical ports used for vNIC and for non-vNIC purposes can be mixed on the same physical port.



Is link aggregation of vNIC clients supported?

Yes, if the vNIC client has a single backing device. A vNIC client with multiple backing devices (vNIC Failover) is not supported in combination with link aggregation technologies such as IEEE 802.3ad/802.1AX (LACP), AIX Network Interface Backup (NIB), or Linux bonding active-backup mode. SR-IOV link aggregation limitations also apply to client vNIC adapters (see the SR-IOV Frequently Asked Questions).



Are there any limitations to consider when configuring a vNIC adapter Port VLAN ID (PVID)?

Yes, depending on the backing device feature code there may be limitations.

  • For adapters with feature codes EC2R, EC2S, EC2T, EC2U, EC3L, and EC3M, the vNIC adapter VLAN ID Restrictions setting must be Deny All when a non-zero Port VLAN ID is configured. This means operating system VLAN tagging is not supported (tagged frames are dropped) when a non-zero Port VLAN ID is configured. If VLAN tagging with more than one VLAN ID is required, set the logical port's Port VLAN ID to zero and configure the operating system to apply the VLAN tags, as sketched below.
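
A minimal sketch of OS-level VLAN tagging over a vNIC whose Port VLAN ID is zero; the adapter and interface names (ent1, <vnic-interface>) and VLAN ID 300 are illustrative only:

    AIX:    mkdev -c adapter -s vlan -t eth -a base_adapter=ent1 -a vlan_tag_id=300
    Linux:  ip link add link <vnic-interface> name <vnic-interface>.300 type vlan id 300
            ip link set <vnic-interface>.300 up

The AIX command creates a VLAN pseudo-device over the vNIC adapter; the Linux commands create and bring up an 802.1Q VLAN interface over the vNIC device.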



Is SEA failover supported with vNIC?

No, but vNIC Failover provides a similar capability (see "What is vNIC Failover?" below).



What is vNIC Failover?

vNIC Failover is vNIC with multiple backing devices for redundancy (analogous to SEA Failover). vNIC Failover allows a vNIC client to be configured with up to 6 backing devices. One backing device is active while the others are inactive standby devices. If the Power Hypervisor detects that the active backing device is no longer operational, a failover is initiated to the most favored (lowest Failover Priority value) operational backing device.



What are the minimum requirements for vNIC Failover?

POWER8 Servers

  • SR-IOV capable adapter in SR-IOV shared mode
  • PowerVM
    • VIOS Version 2.2.5.0
    • System Firmware Release 860.10
    • HMC Release 8 Version 8.6.0 with mandatory PTF
  • Operating Systems
    • AIX 7.1 TL4 or AIX 7.2
    • IBM i 7.1 TR11 or IBM i 7.2 TR3

POWER9 Servers

  • SR-IOV capable adapter in SR-IOV shared mode
  • System Firmware FW910.00 or later
  • HMC Version / Release: 9.1.910.0 or later
  • VIOS 2.2.6.21 or later
  • Operating systems
    • IBM i 7.2 TR8 and 7.3 TR4 or later
    • AIX
      • AIX Version 7.2 with the 7200-02 Technology Level and Service Pack 7200-02-02-1810 or later
      • AIX Version 7.1 with the 7100-05 Technology Level and Service Pack 7100-05-02-1810 or later
    • Linux
      • SUSE Linux Enterprise Server 12, Service Pack 3, or later, with all available maintenance updates from SUSE
        • Customers cannot perform a network install with ibmvnic on SLES 12 SP3. The install image for SP3 does not contain the updates needed for this to work. IBM is advising customers to install with virtual Ethernet or a dedicated adapter, upgrade to the supported kernel, then create a vNIC adapter.
      • SUSE Linux Enterprise Server 15, or later, with all available maintenance updates from SUSE



What is a vNIC backing device?

A vNIC backing device consists of an SR-IOV shared mode adapter physical port, an SR-IOV logical port, a vNIC server virtual adapter, and a Virtual I/O Server (VIOS). During configuration of a vNIC backing device the user selects a VIOS and an adapter physical port. When the backing device is instantiated, the management console creates a vNIC server and a logical port for the physical port on the VIOS. Once created, the backing device is available as either an active or a standby backing device for its vNIC client. The logical port and vNIC server adapter are associated with a single vNIC client.

Configuration of a vNIC backing device includes the ability to select a Capacity % and a Failover Priority. The Capacity % is the user's desired minimum percentage of the physical port's resources, including bandwidth, to associate with the logical port when the backing device becomes the active backing device. The Failover Priority is used by the Power Hypervisor during failover to select the new active backing device: on failover, the Power Hypervisor selects the operational backing device with the lowest Failover Priority number.



Can backing devices be dynamically added to or removed from a vNIC client?

Yes, backing devices can be dynamically added to and removed from a vNIC client.



What is the backing device Failover Priority value and how is it used?

The Failover Priority is a number between 1 and 100 and is used by the Power Hypervisor during failover to select the new active backing device. On failover, the Power Hypervisor selects the operational backing device with the lowest Failover Priority number. For example, if backing devices with Failover Priority values of 10, 20, and 50 are configured and the device with priority 10 is not operational, the device with priority 20 becomes the active backing device.



What are the Power Hypervisor and VIOS memory requirements for vNIC and vNIC Failover?

The following table provides estimates for memory usage by the Power Hypervisor and VIOS for vNIC and vNIC Failover.  Actual memory usage may vary by configuration.

Adapter FCs                              Hypervisor memory   Hypervisor memory   Hypervisor memory per   VIOS memory per
                                         per adapter         per vNIC client     vNIC backing device     vNIC backing device
EN0H, EN0J, EN0K, EN0L, EN0M, EN0N,
EN15, EN16, EN17, EN18                   160MB               9MB                 0.7MB                   7.5MB
EC3L, EC3M                               3.7GB               9MB                 0.7MB                   25MB
EC2T, EC2U, EC2R, EC2S                   2.9GB               9MB                 0.7MB                   25MB

 



What are the VIOS CPU usage requirements for vNIC and vNIC Failover?

In general, CPU utilization for vNIC traffic flowing through the VIOS is workload dependent and will vary with the type of network traffic. As a rough guideline for peak-bandwidth (large packet) workloads, allow about 0.7 additional VIOS cores per 10 Gb/s of bandwidth. For example, a vNIC client expected to drive 20 Gb/s of large-packet traffic through one VIOS would be sized at roughly 1.4 additional cores as a starting point. Workloads with high message rates and small packets may require more VIOS CPU.

 



What is the vNIC Auto Priority Failover option?

When a vNIC client is configured with multiple backing devices, the Auto Priority Failover option determines what action the Power Hypervisor takes when a backing device with a more favored Failover Priority becomes operational. When Auto Priority Failover is enabled, the Power Hypervisor initiates a failover to a more favored operational backing device when one becomes available, even if the current active backing device is still operational. When Auto Priority Failover is disabled, the Power Hypervisor does not initiate a failover simply because a more favored backing device has become available.

Disabling Auto Priority Failover can reduce failovers if a backing device is frequently moving between operational and not operational.



Where can I get additional information on vNIC?



Are there any known issues with vNIC?

  • Linux Kdump issue

Description:

If Kdump is configured to copy crash dumps to a remote location over a vNIC adapter, dump capture may fail as a result of carrier loss and device failover operations during device driver probe.

Workaround:

Dump to an alternate device, such as a local disk or SAN disk, or transmit the dump over a virtual Ethernet adapter.

 

Reference: SUSE Bugzilla 1115428

 

  • AIX NIB configuration issue

Description:

A vNIC device in an AIX Network Interface Backup (NIB) etherchannel configuration might not be able to communicate with other SR-IOV logical ports or vNICs in other logical partitions on the same system.

AIX APARs: IV77944, IV80034, IV80127, IV82254, IV82479

Resolution:

Fix available, see Fix pack information for: NIB ETHERCHANNEL WITH SR-IOV VF PORT PROBLEM at http://www-01.ibm.com/support/docview.wss?uid=isg1fixinfo159817

 

  • IBM i vNIC VLAN restrictions issue

Description:

An IBM i logical partition vNIC configured with VLAN ID restriction option “Allow specified” may not be able to communicate using the vNIC.

Workaround:

vNIC for an IBM i logical partition should not be configured with VLAN ID restriction option “Allow specified”.

Resolution:

V7R1 – Resolution is not required as OS generated VLAN tags are not supported in V7R1

V7R2 – Apply PTF MF62676

V7R3 – Apply PTF MF62703

 

  • vNIC PVID issue

Description:

An issue was discovered when a vNIC is configured with a non-zero Port VLAN ID (PVID), which may result in loss of connectivity.

Two symptoms may be experienced due to this issue. 

When an SR-IOV logical port with the Promiscuous option enabled is sharing a physical port with a vNIC configured with a non-zero PVID, VLAN tagged traffic for the promiscuous logical port may inadvertently be dropped if the VLAN ID is the same as a non-zero PVID.

A vNIC configured with a non-zero PVID may lose connectivity when another logical port on the same physical port is varied on or off, configured or unconfigured, goes through hardware-level recovery, or when a partition is powered on or off.

Workaround:

Instead of specifying a non-zero PVID at the Hardware Management Console (HMC), enable OS-level VLAN tagging support.

For example:

AIX:  mkdev -c adapter -s vlan -t eth -a base_adapter='ent1' -a vlan_tag_id='300'

IBM i:  on ADDTCPIFC, under LIND, add a "Virtual LAN identifier" other than *NONE.  (Requires 7.2 or later.)

Linux:  vconfig add <device> <vlanId>
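
On newer Linux distributions where the legacy vconfig tool is no longer shipped, an equivalent VLAN interface can typically be created with the ip tool; the device name and VLAN ID are placeholders:

    ip link add link <device> name <device>.<vlanId> type vlan id <vlanId>
    ip link set <device>.<vlanId> up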

In addition, for Linux the vNIC must not be configured on the same physical port as an SR-IOV logical port that has the Promiscuous option enabled.

Resolution:

System firmware service packs with new adapter firmware (10.2.252.1922 or later):

FW840.40 or later

FW860.10 or later
