
NVMe over Fibre Channel (NVMeoF) on Power

By Md Tanweer Alam posted Tue December 05, 2023 05:07 AM

  

NVMe over Fibre Channel (NVMeoF) on Power Systems

Introduction: In the ever-evolving landscape of data storage and retrieval, NVMe (Non-Volatile Memory Express) technology has emerged as a game-changer. By leveraging the high-speed capabilities of Fibre Channel adapters, NVMe over Fibre Channel (NVMeoF) offers a significant boost in performance and efficiency. AIX, IBM's robust operating system on Power Systems, supports NVMe over Fibre Channel adapters, providing a powerful storage solution for enterprise environments. In this blog post, we will delve into the world of NVMeoF adapters on AIX, exploring their benefits, configuration, and considerations for optimal implementation.

I. Understanding NVMe over Fibre Channel (NVMeoF) Adapters

NVMeoF adapters combine the advantages of NVMe storage technology with the high-performance Fibre Channel protocol. By leveraging the low-latency, high-bandwidth characteristics of Fibre Channel, they enable fast communication between AIX systems and NVMe storage devices, delivering a step change in storage speed and scalability.

II. Benefits of NVMeoF Adapters on AIX

  1. Enhanced Performance: NVMeoF leverages the NVMe protocol's optimized command set and the high-bandwidth capabilities of Fibre Channel, resulting in significantly improved I/O performance compared to traditional storage interfaces.
  2. Reduced Latency: NVMeoF minimizes latency by eliminating the overhead associated with legacy storage protocols, allowing for faster data access and reduced processing time.
  3. Scalability and Flexibility: NVMeoF adapters provide the ability to scale storage systems seamlessly, accommodating the growing demands of modern data-intensive workloads. Additionally, they support multi-pathing, enabling redundant and highly available storage configurations.
  4. Improved Efficiency: NVMe over Fibre Channel optimizes CPU utilization by offloading storage-related tasks to the adapter, freeing up system resources and enabling applications to run more efficiently.
  5. Compatibility: NVMe over Fibre Channel adapters are backward compatible with existing Fibre Channel infrastructure, allowing organizations to leverage their investment in Fibre Channel while gaining the benefits of NVMe storage technology.
  6. Parallel I/O: FC-NVMe supports multiple I/O queues and multiple outstanding commands per queue, which allows for more efficient data transfer and can further improve performance.
  7. Support for NVMe-specific features: FC-NVMe carries the native NVMe command set, so NVMe-specific features and the protocol's paired submission and completion queues remain available over the fabric.
  8. Comparison with other technologies: FC-NVMe offers several advantages over traditional storage solutions. The table below compares FC-NVMe with other common storage technologies:


Feature            | FC-NVMe                                                                    | iSCSI                                            | NVMe/TCP                     | SAS                | SATA
-------------------|----------------------------------------------------------------------------|--------------------------------------------------|------------------------------|--------------------|-----------------
Performance        | High                                                                       | Medium                                           | Medium                       | Medium             | Low
Latency            | Low                                                                        | Medium                                           | Medium                       | Medium             | High
I/O                | Parallel                                                                   | Serial                                           | Serial                       | Serial             | Serial
Scalability        | High                                                                       | Medium                                           | Medium                       | Medium             | Low
Power consumption  | Low                                                                        | Medium                                           | Medium                       | Medium             | High
Reliability        | High                                                                       | Medium                                           | Medium                       | Medium             | Low
Cost               | High                                                                       | Medium                                           | Low                          | Medium             | Low
Use cases          | High-performance computing (HPC), artificial intelligence (AI), databases | General-purpose storage, virtual machines (VMs)  | General-purpose storage, VMs | Enterprise storage | Consumer storage

III. Configuring the NVMeoF Protocol on AIX

To configure the NVMeoF protocol in AIX on Power, follow the steps below.

  1. Requirements: Ensure that your AIX system supports NVMeoF adapters and has the necessary hardware infrastructure, including compatible Fibre Channel switches, NVMeoF-capable FC adapters, and a storage area network (SAN). A quick way to check the software levels from the command line is sketched after these steps.

    · Power System: Power10 or later

    · Firmware: FW1030 or later

    · FC adapter: 32 Gb FC adapters that support NVMeoF (for example, feature codes 5787, EL5U, EL5V, EN1A, EN1B)

    · VIOS: 3.1.4.0 or later

    · AIX: 7.3.1 or later

    · NVMe protocol driver available

    · HMC: 1030 or later

  2. Install and Verify Adapter: Physically install the NVMeoF adapter in your AIX system. Boot the system and use commands such as cfgmgr and lsdev/lscfg to verify the adapter's presence and proper functioning.
  3. Update AIX Device Drivers: Update the AIX device drivers and multipath (MPIO) software to ensure compatibility and optimal performance for NVMeoF adapters. Refer to the AIX and storage vendor documentation for detailed instructions on driver updates.
  4. Configure Fibre Channel (FC) Connections: Configure the Fibre Channel connections between the NVMeoF adapter and the Fibre Channel switch. This involves identifying the adapter's worldwide port names (WWPNs) and zoning the switch to allow communication between the AIX system and the NVMe storage devices.
  5. Discovery and Configuration: Use the cfgdev (VIOS) and cfgmgr (AIX) commands to discover and configure the NVMe storage devices connected through the NVMeoF adapter. This process ensures that AIX recognizes and properly interacts with the NVMe storage devices.
  6. Test and Verify: Conduct thorough testing to ensure the NVMeoF adapter is functioning correctly. Use commands such as lspv and lspath to verify the presence of, and paths to, the NVMeoF storage devices.
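
    Before walking through the listing commands below, the software prerequisites from step 1 can be verified from the command line, and the adapter WWPNs needed for the zoning in step 4 can be collected. This is a minimal sketch; the fcs0 adapter name is illustrative and the levels reported will differ on your system.

      # On the AIX LPAR: confirm the AIX technology level (7.3.1 or later)
      oslevel -s
      # On the VIOS (as padmin): confirm the VIOS level (3.1.4.0 or later)
      ioslevel
      # On the AIX LPAR: confirm the platform firmware level (FW1030 or later)
      lsmcode -c
      # On the AIX LPAR: record the WWPN of an FC port for switch zoning
      lscfg -vpl fcs0 | grep "Network Address"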

    Commands to list and configure NVMe over FC for an LPAR:

    • To list the NPIV-capable physical FC ports (on the VIOS):

      $ lsnports

      name             physloc                        fabric tports aports swwpns awwpns

      fcs0             U78D8.ND1.FGD007L-P0-C2-C0-T0       1    255    255   2048    2044

      fcs1             U78D8.ND1.FGD007L-P0-C2-C0-T1       1    255    254   2048    2042

      fcs2             U78D8.ND1.FGD007L-P0-C2-C0-T2       1    255    255   2048    2044

      fcs3             U78D8.ND1.FGD007L-P0-C2-C0-T3       1    255    255   2048    2044

    • To list the FC-NVMe and FC SCSI protocol devices under an FC adapter:

      $ lsdev -dev fcs0 -child

      name             status      description

      fcnvme0          Available   FC-NVMe Protocol Device

      fscsi0           Available   FC SCSI I/O Controller Protocol Device

    • To list the fcnvme device attributes, including the configuration state and the host NQN (NVMe Qualified Name):

      # lsattr -El fcnvme0

      attach     switch                              How this adapter is connected  False

      autoconfig available                           Configuration State            True

      host_nqn   nqn.2014-08.org.nvmexpress:uuid:339fb730-d6ee-4709-8a0a-12efdbcd1711 Host NQN (NVMe Qualified Name) True

    • To list the SCSI/NVMe ports, status, queues, and flags of the virtual FC host adapters (on the VIOS):

      $ lsmap -all -npiv -proto

      Name          Physloc                            ClntID ClntName       ClntOS

      ------------- ---------------------------------- ------ -------------- -------

      vfchost0      U9080.HEX.1358D28-V18-C3              190                

       

      Status:NOT_CONNECTED

      FC name:                        FC loc code:

      Flags:0x281<NOT_MAPPED,NOT_CONNECTED>

      VFC client name:                VFC client DRC:

       

      SCSI Ports:0    SCSI Queues:0    SCSI Status:NOT_LOGGED_IN 

      SCSI Flags:0x0<>

      NVME Ports:0    NVME Queues:0    NVME Status:NOT_LOGGED_IN 

      NVME Flags:0x0<>

    • To display the states of the FC-NVMe and FC SCSI protocol devices:

      # lsdev -p fcs*

      fcnvme0 Available 00-00-02 FC-NVMe Protocol Device

      fcnvme1 Available 00-01-02 FC-NVMe Protocol Device

      fcnvme2 Available 00-02-02 FC-NVMe Protocol Device

      fcnvme3 Available 00-03-02 FC-NVMe Protocol Device

      fscsi0  Available 00-00-01 FC SCSI I/O Controller Protocol Device

      fscsi1  Available 00-01-01 FC SCSI I/O Controller Protocol Device

      fscsi2  Available 00-02-01 FC SCSI I/O Controller Protocol Device

    • To list the NVMe discovery and dynamic controllers:

      # lsdev -p fcnvme*

      nvme0 Available 00-00-02 NVMe Discovery Controller

      nvme1 Available 00-01-02 NVMe Discovery Controller

      nvme2 Available 00-02-02 NVMe Discovery Controller

      nvme3 Available 00-03-02 NVMe Discovery Controller

      nvme4 Available 00-00-02 NVMe Dynamic Controller

      nvme5 Available 00-01-02 NVMe Dynamic Controller

      nvme6 Available 00-02-02 NVMe Dynamic Controller

      nvme7 Available 00-03-02 NVMe Dynamic Controller

    • To list the available NVMe disks:

      # lsdev -p nvme*

      hdisk45 Available 00-00-02 EMC PowerMax NVMe Disk

      hdisk46 Available 00-00-02 EMC PowerMax NVMe Disk

    • To list the NVMe protocol state of each virtual FC host adapter (on the VIOS):

      $ vfcctrl -list -protocol

      Adapter         disabled_by_lpm  disabled_by_user ClntId   ClntName       

      --------------- ---------------- ---------------- -------- ----------------

      vfchost0        none             NVMe             190      --             

      vfchost1        none             NVMe             189      --             

      vfchost2        none             NVMe             5        densnpiv03     

      vfchost3        none             NVMe             187      densnpiv19     

      vfchost4        none             NVMe             164      densnpiv23 

    • To enable the NVMe protocol for an NPIV client LPAR (specified by its client partition ID):

      $ ioscli vfcctrl -enable -protocol nvme -cpid 164

      The "nvme" protocol for "vfchost4" is enabled.

       

      $ ioscli vfcctrl -enable -protocol nvme -cpid 187

      The "nvme" protocol for "vfchost3" is enabled.

       

      $ ioscli vfcctrl -enable -protocol nvme -cpid 5

      The "nvme" protocol for "vfchost2" is enabled.

    • To validate that NVMe is enabled for a client: the disabled_by_user column should display none; if it displays NVMe, the protocol is disabled and must be enabled.

      $ vfcctrl -list -protocol

      Adapter         disabled_by_lpm  disabled_by_user ClntId   ClntName       

      --------------- ---------------- ---------------- -------- ----------------

      vfchost0        none             NVMe             190      --             

      vfchost1        none             NVMe             189      --             

      vfchost2        none             none             5        densnpiv03     

      vfchost3        none             none             187      densnpiv19     

      vfchost4        none             none             164      densnpiv23     
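
    Once the NVMe protocol is enabled for a client, the AIX client LPAR can rediscover its devices and confirm that the FC-NVMe controllers and NVMe disks are visible. A minimal sketch, run on the client LPAR (the hdisk45 name is illustrative):

      # Rescan for newly available devices
      cfgmgr
      # List the FC-NVMe protocol devices, NVMe controllers, and NVMe-backed disks
      lsdev -C | grep -i nvme
      # Verify the MPIO paths to an NVMe-backed disk
      lspath -l hdisk45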

IV. Considerations for NVMeoF Adapter Implementation in AIX

  1. Compatibility: Ensure compatibility between the NVMeoF adapter, AIX version, and NVMe storage devices. Consult vendor documentation and compatibility matrices to verify compatibility before implementation.
  2. Performance Monitoring: Monitor performance metrics such as I/O latency, bandwidth utilization, and queue depths to fine-tune the NVMeoF adapter configuration for optimal performance (see the sketch after this list).
  3. Multipathing and Redundancy: Implement multipathing to ensure redundancy and high availability. Configure multiple paths to the NVMe storage devices using redundant Fibre Channel switches and appropriate configuration settings.
  4. Firmware and Driver Updates: Regularly update the NVMeoF adapter firmware and AIX device drivers to benefit from performance enhancements, bug fixes, and compatibility improvements provided by the vendors.
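
As noted in considerations 2 and 3, AIX provides standard tools for this monitoring and path verification. A minimal sketch (the adapter and disk names are illustrative):

    # FC adapter traffic, queue, and error statistics
    fcstat fcs0
    # Extended per-disk service-time and queue statistics (5-second interval, 3 samples)
    iostat -D hdisk45 5 3
    # MPIO path status for an NVMe-backed disk
    lsmpio -l hdisk45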

Conclusion: NVMe over Fibre Channel (NVMeoF) adapters in AIX open up new possibilities for high-performance, low-latency storage solutions. By combining NVMe technology with the robustness of Fibre Channel, AIX systems can achieve unprecedented levels of I/O performance and scalability. When properly configured and implemented, NVMeoF adapters empower businesses to meet the demands of modern data-driven applications while ensuring future-proof storage solutions.

References:

https://www.ibm.com/docs/en/storage-scale/5.1.7?topic=events-nvmeof

https://www.ibm.com/docs/en/power10?topic=vfc-npiv-fibre-channel-nvme-over-fabrics-protocol-support#p10hb1_npiv_fibre_channel_protocol__title__4

Contributors:

@SIVAPRAKASAM SIVASUBRAMANIAN
