
SR-IOV FAQs 

Fri June 19, 2020 02:08 PM

PowerVM® SR-IOV FAQs

In 2015, PowerVM® introduced support for Single Root I/O Virtualization (SR-IOV) on POWER8® systems, and in 2018 the SR-IOV support was enhanced for POWER9™ systems. Given the strong interest in this technology, we have compiled answers to many frequently asked questions below.

For information on vNIC please see the vNIC Frequently Asked Questions.

 


SR-IOV FAQs



What is PowerVM SR-IOV technology?

PowerVM Single Root I/O Virtualization (SR-IOV) technology allows multiple partitions to share PCIe® devices. The device must support the Single Root I/O Virtualization specification, an extension to the PCI Express® specification that defines how multiple operating systems (partitions) share a PCIe device. PowerVM SR-IOV technology is analogous to Integrated Virtual Ethernet (IVE) with its Host Ethernet Adapter (HEA) provided on some POWER6 and POWER7 based systems. See "How do the network virtualization technologies for Ethernet adapters compare?" for comparisons of PowerVM network virtualization technologies.



What’s the difference between an SR-IOV logical port and an SR-IOV virtual function (VF)?

An SR-IOV logical port is an I/O device created for a partition or a partition profile using the management console (HMC) when a user intends for the partition to access an SR-IOV adapter virtual function.  An SR-IOV virtual function is a PCIe function defined by the SR-IOV specification.  When a partition with SR-IOV logical ports is activated or when an SR-IOV logical port is dynamically added to a partition, the hypervisor will allocate and configure an adapter virtual function and map it to the partition SR-IOV logical port.



What POWER8 system I/O adapters support SR-IOV shared mode?

SR-IOV Capable Network I/O Adapters | Low profile, multi-OS FC | Full high, multi-OS FC | Low profile, Linux-only FC | Full high, Linux-only FC
PCIe2 4-port (2x10GbE+2x1GbE) SR Optical fiber and RJ45 (note 4) | EN0J (note 1) | EN0H (note 3) | EL38 | EL56
PCIe2 4-port (2x10GbE+2x1GbE) copper twinax and RJ45 (note 4) | EN0L (note 1) | EN0K (note 3) | EL3C | EL57
PCIe2 4-port (2x10GbE+2x1GbE) LR Optical fiber and RJ45 (note 4) | EN0N | EN0M | n/a | n/a
PCIe3 4-port 10GbE SR optical fiber | EN16 (note 2) | EN15 | n/a | n/a
PCIe3 4-port 10GbE copper twinax | EN18 (note 2) | EN17 | n/a | n/a

Notes:

  1. SR-IOV announced February 2015 for Power E870/E880 system node. Now available in other POWER8 servers.
  2. Adapter is only available in Power E870/E880 system node, not 2U server.
  3. SR-IOV announced April 2014 for Power 770/780/ESE systems. With the April 2015 announcement, also available in POWER8 servers.
  4. Withdrawn

 



What POWER9 system I/O adapters support SR-IOV shared mode?

 

In the table below, each cell lists the adapter feature code supported in the server's own PCIe slots, with the feature code supported in an attached EMX0 PCIe Gen3 I/O expansion drawer (ELMX for the L922) shown in parentheses. A dash means no feature code is listed for that combination.

SR-IOV Capable Network I/O Adapters | FCs | S922, H922 | S914, S924, H924 | L922 | E950 | E980
PCIe3 4-port (2x10GbE+2x1GbE) LR Optical fiber and RJ45 (note 4) | EN0N, EN0M | - | - | - | EN0M (EN0M) | EN0N (EN0M)
PCIe3 4-port (2x10GbE+2x1GbE) SR Optical fiber and RJ45 (note 4) | EN0J, EN0H, EL38, EL56 | EN0J (EN0H) | EN0H (EN0H) | EL38 (EL56) | EN0H (EN0H) | EN0J (EN0H)
PCIe3 4-port (2x10GbE+2x1GbE) copper twinax and RJ45 (note 4) | EN0L, EN0K, EL3C, EL57 | EN0L (EN0K) | EN0K (EN0K) | EL3C (EL57) | EN0K (EN0K) | EN0L (EN0K)
PCIe3 4-port 10GbE SR optical fiber | EN16, EN15 | (EN15) | EN15 (EN15) | (EN15) | EN15 (EN15) | EN16 (EN15)
PCIe3 4-port 10GbE copper twinax | EN18, EN17 | - | - | - | EN17 (EN17) | EN18 (EN17)
PCIe3 LP 2-Port 10GbE NIC & RoCE SR/Cu Adapter (note 1) | EC2R, EC2S | EC2R (EC2S) | EC2S (EC2S) | EC2R (EC2S) | EC2S (EC2S) | EC2R (EC2S)
PCIe3 LP 2-Port 25/10GbE NIC & RoCE SR/Cu Adapter (note 1) | EC2T, EC2U | EC2T (EC2U) | EC2U (EC2U) | EC2T (EC2U) | EC2U (EC2U) | EC2T (EC2U)
PCIe3 LP 2-port 100/40GbE NIC & RoCE QSFP28 Adapter x16 (notes 1, 4) | EC3L, EC3M | EC3L | EC3M | EC3L | EC3M (note 3) | EC3L (note 3)
PCIe4 LP 2-port 100/40GbE NIC & RoCE QSFP28 Adapter x16 (note 1) | EC67, EC66 | EC67 (note 3) | EC66 (note 3) | EC67 (note 3) | EC66 (note 3) | EC67 (note 3)

Notes:

    1. SR-IOV support for NIC function prior to FW930. SR-IOV support for both NIC and RoCE with FW930
    2. Low profile version not supported in 2U Scale-out servers
    3. SR-IOV support with FW930
    4. Withdrawn



What is the maximum number of SR-IOV capable adapters supported in SR-IOV shared mode per system?

The maximum number of SR-IOV shared mode enabled adapters per system is 32. If the system supports fewer than 32 SR-IOV capable PCIe slots, then the maximum is the number of SR-IOV capable PCIe slots.



How many logical ports/VFs are supported per adapter?

SR-IOV Capable Network I/O Adapters | Feature codes | Logical ports per physical port (by physical port link speed) | Logical ports per adapter
PCIe2 4-port (2x10GbE+2x1GbE) SR Optical fiber and RJ45 | EN0J, EN0H, EL38, EL56 | 1Gb: 4; 10Gb: 20 | 48
PCIe2 4-port (2x10GbE+2x1GbE) copper twinax and RJ45 | EN0L, EN0K, EL3C, EL57 | 1Gb: 4; 10Gb: 20 | 48
PCIe2 4-port (2x10GbE+2x1GbE) LR Optical fiber and RJ45 | EN0N, EN0M | 1Gb: 4; 10Gb: 20 | 48
PCIe3 4-port 10GbE SR optical fiber | EN16, EN15 | 10Gb: 16 | 64
PCIe3 4-port 10GbE copper twinax | EN18, EN17 | 10Gb: 16 | 64
PCIe3 LP 2-Port 10Gb NIC & RoCE SR/Cu Adapter | EC2R, EC2S | 10Gb: 40 | 80
PCIe3 LP 2-Port 25/10Gb NIC & RoCE SR/Cu Adapter | EC2T, EC2U | 25/10Gb: 40 | 80
PCIe3 LP 2-port 100GbE NIC & RoCE QSFP28 Adapter x16 | EC3L, EC3M | 40/100Gb: 60 | 120
PCIe4 LP 2-port 100/40GbE NIC & RoCE QSFP28 Adapter x16 | EC67, EC66 | 40/100Gb: 60 | 120

 



What are the requirements to enable SR-IOV shared mode?

POWER7 Systems

  • IBM Power 770 (9117-MMD), IBM Power 780 (9179-MHD), or Power ESE (8412-EAD) Power Systems servers (POWER7+)
    • PCIe gen2 slots (i.e. no I/O drawer slots)
  • HMC V7R7.9.0
    • HMC required to support SR-IOV capable system
  • Server firmware FW780.10 (AM780_056)
  • PowerVM standard or enterprise edition
    • PowerVM express edition allows only one partition to use the SR-IOV logical ports per adapter
  • Minimum client operating systems:
    • AIX 6.1 TL9 SP2
    • AIX 7.1 TL3 SP2
    • SUSE Linux Enterprise Server 11 SP3
    • Red Hat Enterprise Linux 6.5
    • IBM i 7.1 TR8
    • IBM i 7.2
  • SR-IOV logical ports assigned to the VIOS require VIOS 2.2.3.2 or later

POWER8 Power Systems starting March 2015

  • IBM Power E870 (9119-MME), IBM Power E880 (9119-MHE)
  • PCIe gen3 slots (no I/O drawer slots)
  • HMC required for SR-IOV
    • HMC required to support SR-IOV capable system
  • Server firmware 820.10 SP
  • PowerVM standard or enterprise edition
  • Minimum client operating systems:
    • AIX 6.1 TL9 SP4 and APAR IV63331, or later
    • AIX 7.1 TL3 SP4 and APAR IV63332, or later
    • SUSE Linux Enterprise Server 11 SP3, or later
    • Red Hat Enterprise Linux 6.5 or 7, or later
    • IBM i 7.1 TR9, or later
    • IBM i 7.2 TR1, or later
  • SR-IOV logical ports assigned to the VIOS require VIOS 2.2.3.4 with interim fix IV63331, or later

POWER8 Power Systems starting June 2015

  • IBM Power System E870 (9119-MME), IBM Power System E880 (9119-MHE),
    IBM Power System E850 (8408-E8E),
    IBM Power System S824 (8286-42A), IBM Power System S814 (8286-41A),
    IBM Power System S822 (8284-22A), IBM Power System S824L (8247-42L),
    IBM Power System S822L (8247-22L), IBM Power System S812L (8247-21L)
  • SR-IOV support also available for the PCIe Gen3 I/O expansion drawer
    (2 SR-IOV slots per Fan-out Module)
  • HMC required for SR-IOV
  • Server firmware 830
  • PowerVM standard or enterprise edition
    • PowerVM express edition allows only one partition to use the SR-IOV logical ports per adapter
  • Minimum client operating systems:
    • AIX 6.1 TL9 SP5 and APAR IV68443, or later
    • AIX 7.1 TL3 SP5 and APAR IV68444, or later
    • IBM i 7.1 TR10, or later
    • IBM i 7.2 TR2, or later
    • Red Hat Enterprise Linux 6.5, or later
    • Red Hat Enterprise Linux 7, or later
    • SUSE Linux Enterprise Server 11 SP3, or later
    • SUSE Linux Enterprise Server 12, or later
    • Ubuntu 15.04, or later
  • SR-IOV logical ports assigned to the VIOS require VIOS 2.2.3.51, or later

POWER9 Power Systems Scale-out Servers

  • IBM Power System S922 (9009-22A), IBM Power System S914 (9009-41A),
    IBM Power System S924 (9009-42A), IBM Power System H922 (9223-22H),
    IBM Power System H924 (9223-42H), IBM Power System L922 (9008-22L)
  • SR-IOV adapters also supported in the PCIe Gen3 I/O expansion drawer
  • System Firmware FW910.00 or later
  • HMC Version / Release: 9.1.910.0 or later
  • Minimum Client Operating systems
    • IBM i 7.2 TR8 and 7.3 TR4 or later
    • AIX Version 7.2 with the 7200-02  Technology Level and Service Pack  7200-02-02-1810 or later
    • AIX Version 7.1 with the 7100-05 Technology Level  and Service Pack 7100-05-02-1810 or later
    • AIX Version 6.1 with the 6100-09 Technology Level and Service Pack 6100-09-11-1810  or later (AIX 6.1 service extension required)
  • SR-IOV logical ports assigned to the VIOS require VIOS 2.2.6.21, or later

POWER9 Power Systems E950 & E980 (Nov. 2018 GA)

  • IBM Power System E980 (9080-M9S) 1-4 Node, IBM Power System E950 (9040-MR9)
  • SR-IOV adapter also supported in the PCIe Gen3 I/O expansion drawer
  • System Firmware FW920.20
  • HMC version/release: V9 R1.921.0 or later
  • VIOS
    • VIOS 2.2.6.31 or later
    • VIOS 3.1 or later (planned availability 11/9/2018)
  • AIX

    •  AIX Version 7.2 with the 7200-03 Technology Level or later
    • AIX Version 7.1 with the 7100-05 Technology Level and Service Pack  7100-05-03-1838 or later
    • AIX Version 6.1 with the 6100-09 Technology Level and Service Pack 6100-09-12-1838   or later (AIX 6.1 service extension required)
    • AIX Version 7.2 with the 7200-01 Technology Level and Service Pack 7200-01-05-1845 or later (planned availability 1/31/2019)
    • AIX Version 7.2 with the 7200-02 Technology Level and Service Pack  7200-02-03-1845 or later (planned availability 1/31/2019)
    • AIX Version 7.1 with the 7100-04 Technology Level and Service Pack 7100-04-07-1845  or later (planned availability 1/31/2019)  
  • IBM i (E980 only)

    • IBM i 7.3 TR5 - All supported SR-IOV capable adapters
    • IBM i 7.2 TR9  - FCs EN0H, EN0J, EN0K, EN0L, EN15, EN16, EN17, EN18
  • Linux

    • Red Hat Enterprise Linux 7.5 for Power LE (p8compat), or later, with Mellanox OFED (note 1)
    • Red Hat Enterprise Linux for SAP with Red Hat Enterprise Linux 7 for Power LE version 7.5, or later, with Mellanox OFED (note 1)
    • SUSE Linux Enterprise Server 12 Service Pack 3, or later, with Mellanox OFED (note 1)
    • SUSE Linux Enterprise Server for SAP with SUSE Linux Enterprise Server 12 Service Pack 3, or later, with Mellanox OFED (note 1)
    • SUSE Linux Enterprise Server for SAP with SUSE Linux Enterprise Server 11 Service Pack 4, or later, with Mellanox OFED (note 1)
    • SUSE Linux Enterprise Server 15, or later, with Mellanox OFED (note 1)

Note:

  1. See http://www.mellanox.com/page/firmware_table_IBM_SystemP

POWER9 Power Systems SR-IOV Support for FCs EC66/EC67 and RoCE

  • IBM Power System E980 (9080-M9S), IBM Power System E950 (9040-MR9), POWER9 Scale-out servers
  • For RoCE support an SR-IOV RoCE capable adapter (FCs EC2R, EC2S, EC2T, EC2U, EC3L, EC3M, EC66, EC67)
  • System Firmware FW930
  • HMC version/release: V9 R1.930.0 or later
  • VIOS
    • VIOS 2.2.6.41 or later
    • VIOS 3.1.0.21 or later
  • AIX
    • AIX Version 7.2 with the 7200-03 Technology Level and Service Pack 7200-03-03-1914 or later
    • AIX Version 7.1 with the 7100-05 Technology Level and Service Pack 7100-05-04-1914 or later
    • AIX Version 7.2 with the 7200-01 Technology Level and Service Pack  7200-01-06-1914 or later (planned availability August 30, 2019)
    • AIX Version 7.2 with the 7200-02 Technology Level and Service Pack  7200-02-04-1914 or later (planned availability August 30,  2019)
    • AIX Version 7.1 with the 7100-04 Technology Level and Service Pack  7100-04-08-1914  or later (planned availability August 30, 2019)
  • IBM i
    • IBM i 7.3 TR6 and IBM i 7.4, or later, for feature codes EC66 and EC67 SR-IOV support
    • IBM i 7.2 or later for feature codes EC66 and EC67 vNIC support
    • IBM i 7.4 or later for RoCE support
  • Linux
    • Red Hat Enterprise Linux 7.6 or later with Mellanox OFED (note 1) 4.5-2.2.0.1 or later (vNIC support is technology preview)
    • Red Hat Enterprise Linux 8.0 or later with Mellanox OFED (note 1) 4.5-2.2.0.1 or later (vNIC support is technology preview)
    • SUSE Linux Enterprise Server 12 Service Pack 4 or later with Mellanox OFED (note 1) 4.5-2.2.0.1 or later
    • SUSE Linux Enterprise Server 15 Service Pack 1 or later with Mellanox OFED (note 1) 4.5-2.2.0.1 or later

Note:

  1. See http://www.mellanox.com/page/firmware_table_IBM_SystemP

 

Hypervisor Memory Requirements

  • The following table provides estimates for memory required by the Power Hypervisor to configure an adapter in SR-IOV shared mode.  Actual memory usage may vary by configuration.

    Adapter FCs | Hypervisor memory per adapter
    EN0H, EN0J, EN0K, EN0L, EN0M, EN0N, EN15, EN16, EN17, EN18 | 160 MB
    EC3L, EC3M | 3.7 GB
    EC2R, EC2S, EC2T, EC2U | 2.9 GB



Are all Power Systems PCIe slots SR-IOV capable?  If not, which slots are SR-IOV capable?

For IBM Power 770 (9117-MMD), IBM Power 780 (9179-MHD), or Power ESE (8412-EAD) Power Systems servers, all PCIe slots within the system units are SR-IOV capable. PCIe slots in the I/O expansion drawers are not SR-IOV capable.

For POWER8 Systems, consult IBM Knowledge Center for the specific systems of interest. In some cases, total system memory may determine whether a PCIe slot is SR-IOV capable.

 

POWER8 system or I/O expansion drawer and the corresponding IBM Knowledge Center PCIe adapter placement rules:

  • 8247-21L, 8247-22L, or 8284-22A: http://www-01.ibm.com/support/knowledgecenter/8247-21L/p8eab/p8eab_83x_8rx_slot_details.htm
  • 8247-42L: https://www-01.ibm.com/support/knowledgecenter/8247-42L/p8eab/p8eab_8247_slot_details.htm
  • 8286-41A and 8286-42A: https://www-01.ibm.com/support/knowledgecenter/8286-41A/p8eab/p8eab_82x_84x_slot_details.htm
  • 8408-E8E: https://www-01.ibm.com/support/knowledgecenter/8408-E8E/p8eab/p8eab_85x_slot_details.htm?cp=8408-E8E%2F0-2-7-2-0
  • 9119-MHE or 9119-MME: https://www-01.ibm.com/support/knowledgecenter/9119-MME/p8eab/p8eab_87x_88x_slot_details.htm
  • PCIe Gen3 I/O expansion drawer: https://www-01.ibm.com/support/knowledgecenter/9119-MHE/p8eab/p8eab_emx0_slot_details.htm

 

For POWER9 Systems consult IBM Knowledge Center for specific systems of interest.

 

POWER9 system or I/O expansion drawer and the corresponding IBM Knowledge Center PCIe adapter placement rules:

  • 9008-22L, 9009-22A, or 9223-22H: https://www.ibm.com/support/knowledgecenter/en/9009-22A/p9eab/p9eab_922_slot_details.htm
  • 9040-MR9: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9eab/p9eab_950_slot_details.htm
  • 9080-M9S: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9eab/p9eab_980_slot_details.htm
  • 9009-41A, 9009-42A, and 9223-42H (note 1): https://www.ibm.com/support/knowledgecenter/en/9223-42H/p9eab/p9eab_914_924_slot_details.htm
  • EMX0 PCIe Gen3 I/O expansion drawer: https://www.ibm.com/support/knowledgecenter/en/9223-42H/p9eab/p9eab_emx0_slot_details.htm

Notes:

  1. Of the three adapters in SR-IOV shared mode under a PCIe switch, a maximum of two adapters can be either FC EC2S or EC2U.



Does SR-IOV require VIOS?

No, a VIOS partition is not required to share an SR-IOV adapter enabled in SR-IOV shared mode.



Can SEA use an SR-IOV logical port as its physical network device?

Yes, but the logical port must be configured with Promiscuous mode enabled. Promiscuous mode is enabled by selecting the Promiscuous mode check box on the management console when the logical port is created. Promiscuous mode can be enabled on one logical port per physical port, which means only one logical port per physical port can be an SEA physical device.
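
A minimal sketch of creating the SEA on the VIOS, assuming ent2 is the SR-IOV logical port created with Promiscuous mode enabled and ent5 is the trunk virtual Ethernet adapter (both device names are illustrative):

$ mkvdev -sea ent2 -vadapter ent5 -default ent5 -defaultid 1

The command is the same one used when the SEA is backed by a dedicated physical adapter; only the choice of backing device changes.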



How do I place an SR-IOV adapter in "Shared" mode?

To place an SR-IOV adapter in shared mode, follow these steps:
1. Click on the CEC properties from the HMC.
2. Click on the I/O tab.
3. Click on the adapter slot.
4. Click on the SR-IOV tab.
5. Click Enable SR-IOV Shared Mode.
6. Click OK.



How do I know the adapter is in SR-IOV "Shared" mode from the HMC?

1. Click on the CEC properties from the HMC
2. Click on the I/O tab
3. Click on the adapter slot
4. Click on the SR-IOV tab
5. The "Shared Mode" checkbox will be checked.
6. Physical ports will be shown and the logical ports can be seen by clicking the physical ports.
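
Alternatively, on HMC levels where the lshwres command supports the sriov resource type, the adapters in shared mode can be listed from the HMC command line. This is a sketch only; <managed_system> is a placeholder, and the attribute names shown in the output vary by HMC release:

> lshwres -r sriov --rsubtype adapter -m <managed_system>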

 



How do I know if my AIX device is an SR-IOV logical port/VF?

On AIX, SR-IOV logical ports appear as adapters with "VF" in their name, as in the following example:
> lsdev -Cc adapter|grep VF
ent0   Available 00-00 PCIe2 100/1000 Base-TX 4-port Converged Network Adapter VF (df1028e214103c04)
ent4   Available 01-00 PCIe2 10GbE SFP+ SR 4-port Converged Network Adapter VF (df1028e214100f04)
ent5   Available 02-00 PCIe2 10GbE SFP+ SR 4-port Converged Network Adapter VF (df1028e214100f04)
ent14  Available 05-00 PCIe2 100/1000 Base-TX 4-port Converged Network Adapter VF (df1028e214103c04)

Additionally, the output of "netstat -v" will show "VIRTUAL_PORT" in the "Driver Flags" section.
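
For a quick check of a single device, the AIX paragraph option of grep can be used to display just the Driver Flags section (ent0 here is an assumed device name):

> entstat -d ent0 | grep -p "Driver Flags"

An SR-IOV logical port will list VIRTUAL_PORT among the flags.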



Does PowerVM provide SR-IOV support for adapter types other than Ethernet adapters?

At this time, the Ethernet adapters listed under "What POWER8 system I/O adapters support SR-IOV shared mode?" and "What POWER9 system I/O adapters support SR-IOV shared mode?" are the only adapters supported in SR-IOV shared mode.



Is Fibre Channel over Ethernet (FCoE) supported in SR-IOV shared mode?

Not at this time.



Are VLANs supported?

Yes, operating systems with VLAN support may be configured to use VLANs.  In addition, a Port VLAN ID (PVID) may be configured for an SR-IOV logical port to provide VLAN tagging and untagging.

VLAN restrictions may be configured for an SR-IOV logical port to limit what VLANs an operating system can use.



Are advanced virtualizations functions such as live partition mobility (LPM) supported when an SR-IOV logical port is configured for a partition?

No, when a partition is configured with an SR-IOV logical port the partition is not a candidate for advanced virtualization functions (e.g. LPM).  However, if a partition is configured with a vNIC virtual adapter it may be a candidate for advanced virtualization functions.



Is the adapter required to be in SR-IOV shared mode prior to configuring a logical port?

Yes, if an adapter is not in SR-IOV shared mode it will not be listed as an option when creating an SR-IOV logical port.



Is link aggregation supported?

IEEE 802.3ad/802.1ax (LACP) is supported if only one logical port is configured for the physical port. The logical port should be configured with a capacity value of 100% to prevent configuration of more than one logical port for the physical port.

Active-backup link aggregation technologies such as AIX Network Interface Backup (NIB), Linux bonding active-backup mode, or IBM i VIPA may be used to provide network failover capability and sharing of the physical port. To ensure detection of logical link failures, a network address to ping should be configured to monitor the link.

For Linux active-backup mode, the fail_over_mac value should be set to "active" (1) or "follow" (2).
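
As an illustration only, the following iproute2 commands create an active-backup bond over two SR-IOV logical ports with ARP monitoring of a network address and fail_over_mac set to active. The interface names eth0/eth1 and the monitor address 192.0.2.1 are assumptions, and distribution tools such as NetworkManager may require the equivalent settings in their own configuration:

ip link add bond0 type bond mode active-backup fail_over_mac active arp_interval 1000 arp_ip_target 192.0.2.1
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up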

Static aggregation or etherchannel configurations are not supported.

See the IBM Power Systems SR-IOV: Technical Overview and Introduction Redpaper for more information on link aggregation.



Is there a way to monitor SR-IOV network activity?

Yes, there are a number of ways to monitor SR-IOV network activity; here are a few options.

  • The hardware management console Performance and Capacity Monitor (PCM) exposes SR-IOV adapter utilization with a per virtual function and partition breakdown. PCM provides both GUI and REST API support.
  • The hardware management console exposes SR-IOV adapter physical port statistics and SR-IOV logical port statistics.  GUI, CLI, and REST APIs are provided.
  • Partition network performance and monitoring tools may also provide logical port performance statistics. SR-IOV logical ports appear as physical devices to a partition, so traditional network performance tools typically include SR-IOV logical port statistics. The specific tools are operating system dependent; a minimal example follows this list.
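
For example, on AIX the detailed per-logical-port counters can be displayed with entstat (ent0 is an assumed device name); on Linux, ethtool -S <device> provides comparable per-device counters:

> entstat -d ent0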



Can users dynamically change a logical port’s capacity value?

No, not while the logical port is configured for an active partition. To change a logical port capacity value, a user may either change it in the partition profile and activate the changed profile, or dynamically remove the logical port from the partition and then dynamically add a logical port with a different capacity value.



Are there any limitations to consider when configuring the SR-IOV logical port OS MAC Address Restrictions or VLAN-ID Restrictions?

Yes, depending on the adapter feature code there may be limitations.

  • For adapters with feature codes EN0J, EN0H, EL38, EL56, EN0L, EN0K, EL3C, EL57, EN0N, EN0M, EN15, EN16, EN17, EN18 the following limitations apply.
    • If either the OS MAC Address Restriction or the VLAN ID Restriction is set to an option other than Allow All then the other restriction must also be set to an option other than Allow All.  

      For example, if the VLAN ID Restriction option is set to Allow List, then the OS MAC Address Restriction option must be set to either Deny All or Allow List.
       
    • If the VLAN ID Restriction option is set to Allow List, then for each VLAN ID in the list that the operating system intends to have active on the interface at the same time there must be an adapter VLAN filter available. The number of VLAN filters available to a logical port depends on the logical port capacity value. The following describes the algorithms used to allocate VLAN filters for logical ports (a worked example follows this list).
      • The following rules apply to adapters with feature codes EN0H, EN0J, EN0K, EN0L, EN0M, EN0N, EL38, EL56, EL3C, EL57:
        • A capacity value of less than 6 percent will provide one VLAN filter (i.e. allow one active VLAN ID).
        • With every 6 percent increment one additional VLAN filter becomes available to the logical port.
        • A maximum of up to 17 VLAN filters are available per physical port.
      • The following rules apply to adapters with feature codes EN15, EN16, EN17, EN18:
        • A capacity <8 percent will provide one VLAN filter.
        • With every 8 percent increment, one additional VLAN filter becomes available.
        • A maximum of up to 13 VLAN filters are available per physical port.
  • For adapters with feature codes EC2R, EC2S, EC2T, EC2U, EC3L, EC3M:
    • There are no dependencies between the logical port OS MAC Address Restrictions and the VLAN ID Restrictions.  For example, the VLAN ID Restrictions can be set to Deny All while the OS MAC Address Restrictions may be set to Allow All.
    • These adapters do not have the same requirements for VLAN filters per logical port but there is a maximum total per physical port.
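
As a worked example of the first set of rules (assuming feature code EN0H and reading the rule as one base filter plus one additional filter per full 6 percent of capacity): a logical port with 2% capacity has 1 VLAN filter, 6% gives 2, and 20% gives 4 (1 plus three full 6-percent increments); capacity values of roughly 96% and above reach the per-physical-port maximum of 17 filters.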



Are there any limitations to consider when configuring an SR-IOV logical port Port VLAN ID (PVID)?

Yes, depending on the adapter feature code there may be limitations.

  • For adapters with feature codes EC2R, EC2S, EC2T, EC2U, EC3L, EC3M the logical port VLAN ID Restrictions must be set to Deny All when a non-zero Port VLAN ID is configured.  This means operating system VLAN tagging is not supported (frames are dropped) if a non-zero Port VLAN ID is configured.  If VLAN tagging with more than one VLAN ID is required,  set the logical port Port VLAN ID to zero and configure the operating system to apply the VLAN tags.



How many MAC addresses are allowed in the logical port MAC restrictions “Allow Specified” MAC list?

Up to 4 MAC addresses are allowed in the MAC list.



How many VLAN IDs are allowed in the logical port VLAN restrictions “Allow Specified” VLAN list?

Up to 20 VLAN IDs are allowed in the VLAN list.



When configuring an SR-IOV physical port, is the Label and Sub-label required?

No, but specifying a label and sub-label makes the selection of a physical port simpler when creating an SR-IOV logical port or vNIC client virtual adapter.  The label may also be used to identify a target physical port during an LPM operation of a partition with a vNIC client virtual adapter.



What happens if the physical port MTU size is set to 1500 and a logical port attempts to transmit packets larger than that MTU?

The physical port will drop packets with a length larger than the physical port MTU.
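
To avoid such drops, keep the operating system MTU of each logical port no larger than the physical port MTU. A minimal sketch (interface names en0 and eth0 are assumptions):

On AIX:
> lsattr -El en0 -a mtu
> chdev -l en0 -a mtu=1500

On Linux:
ip link set dev eth0 mtu 1500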



How do the network virtualization technologies for Ethernet adapters compare?

Technology | Live Partition Mobility | Quality of service (QoS) | Direct access performance | Link aggregation | Requires VIOS
SR-IOV | No (note 1) | Yes | Yes | Yes (note 2) | No
vNIC | Yes | Yes | No (note 3) | Yes (note 2) | Yes
SEA/vEth | Yes | No | No | Yes | Yes
IVE/HEA | No | No | Yes | Yes | No

Notes:

  1. SR-IOV can optionally be combined with VIOS and virtual Ethernet to use higher-level virtualization functions such as Live Partition Mobility (LPM); however, the client partition will not receive the performance or QoS benefits.
  2. Some limitations apply. See the FAQ on link aggregation.
  3. Generally better performance and requires fewer system resources when compared to SEA/virtual Ethernet



Where can I get additional information on SR-IOV?



Are there any known issues with SR-IOV?

  • AIX NIB configuration issue

Description:

An SR-IOV logical port in an AIX Network Interface Backup (NIB) etherchannel configuration might not be able to communicate with other SR-IOV logical ports or vNIC devices in other logical partitions on the same system.

AIX APARs: IV77944, IV80034, IV80127, IV82254, IV82479

Resolution:

Fix available, see Fix pack information for: NIB ETHERCHANNEL WITH SR-IOV VF PORT PROBLEM at http://www-01.ibm.com/support/docview.wss?uid=isg1fixinfo159817

 

  • IBM i SR-IOV logical port VLAN restrictions issue

Description:

An IBM i logical partition SR-IOV logical port configured with VLAN ID restriction option “Allow specified” may not be able to communicate using the logical port.

Workaround:

SR-IOV logical ports for IBM i logical partitions should not be configured with VLAN ID restriction option “Allow specified”.

Resolution:

V7R1 – Resolution is not required as OS generated VLAN tags are not supported in V7R1

V7R2 – Apply PTFs MF62338, MF62348, MF62349

V7R3 – Apply PTFs MF62340, MF62350, and MF62351

 

  • SR-IOV logical port PVID issue

Description:

An issue was discovered when an SR-IOV logical port is configured with a non-zero Port VLAN ID (PVID) which may result in loss of connectivity.

Two symptoms may be experienced due to this issue. 

When an SR-IOV logical port with the Promiscuous option enabled is sharing a physical port with another SR-IOV logical port configured with a non-zero PVID, VLAN tagged traffic for the promiscuous logical port may inadvertently be dropped if the VLAN ID is the same as a non-zero PVID.  

An SR-IOV logical port configured with a non-zero PVID may lose connectivity when another logical port on the same physical port is varied on or off, configured or un-configured, goes through hardware level recovery, or when a LPAR is powered down or up.

Workaround:

Instead of specifying a non-zero PVID at the Hardware Management Console (HMC), enable OS-level VLAN tagging support.

For example:

AIX:  mkdev -c adapter -s vlan -t eth -a base_adapter='<dev>' -a vlan_tag_id='<vlanid>'

IBM i:  on ADDTCPIFC, under LIND, add a "Virtual LAN identifier" other than *NONE.  (Requires 7.2 or later.)

Linux:  vconfig add <device> <vlanId>

In addition, for Linux the SR-IOV logical port must not be configured with the same physical port as an SR-IOV logical port configured with the Promiscuous option enabled.
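
On Linux distributions where the legacy vconfig utility is not available, the iproute2 equivalent is (the device name and VLAN ID remain placeholders as above):

ip link add link <device> name <device>.<vlanId> type vlan id <vlanId>
ip link set <device>.<vlanId> up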

Resolution:

Move to system firmware service packs with new adapter firmware (10.2.252.1922 or later):

FW860.10 or later

FW840.40 or later

FW830.40 or later

 

  • Transmit hang issue

Description:

Under an extremely rare condition one or more SR-IOV logical ports on an adapter may experience a condition where the logical port is unable to transmit packets. 

The following are indications of this issue:

  • Generally, more than one logical port on the adapter experiences this condition.
  • For IBM i, varying the line description off and back on does not resolve the issue.
  • For AIX, multiple LNCENT_TX_ERR errors will be logged for the same logical port, and rmdev/mkdev of the logical port ent device does not resolve the issue.
  • When in SR-IOV shared mode, the adapter firmware level is 10.2.252.1905 or 10.2.252.1913.

Workaround:

To recover the adapter, from the Hardware Management Console (HMC) request a resource dump of the adapter. This will cause a reset of the adapter and may cause a 30-second to 2-minute network disruption where network traffic does not flow through the adapter. On the Manage Dumps, Initiate Dump panel set the Resource selector to:

sriov <adapter location code> restart

Where the “<adapter location code>” is replaced with the location code of the adapter experiencing the error.

Resolution:

Move to a system firmware service pack with 10.2.252.1918 or later SR-IOV adapter firmware. The 10.2.252.1918 adapter firmware is or will be introduced in the following system firmware service packs:

FW840.20 or later

FW830.30 or later

FW820.50 or later
