IBM Technology Preview with NVMe-over-Fabrics

By Archive User posted Thu December 07, 2017 04:28 AM

  
At the AI Summit New York, December 5-6, IBM disclosed a technology preview and demonstration of the integration of IBM POWER9 Systems and IBM FlashSystem 900 using NVMe-over-Fabrics over InfiniBand.  This combination of technologies is ideally suited to run cognitive solutions such as IBM PowerAI Vision, which can ingest massive amounts of data while simultaneously completing real-time inferencing (object detection).  More details on IBM PowerAI Vision can be found here:  https://www.youtube.com/watch?v=nWft6tYVdrc

Enabling new technologies like PowerAI Vision is just the start.  Whether the workload is streaming data, transactional data, or batch processing, one consistent requirement is the lowest possible latency.  Among the leading all-flash storage vendors, IBM, with its FlashSystem 900, has stayed true to its mission of delivering low-latency all-flash arrays.  On the server side, IBM Power Systems has invested in PCIe Gen 4 to reduce latency and increase bandwidth compared to PCIe Gen 3.
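
That bandwidth difference is easy to quantify.  The minimal sketch below works out the theoretical per-slot ceilings, assuming an x16 slot and 128b/130b line encoding; these are generic PCIe figures for illustration, not measurements from the technology preview.

```python
# Rough per-slot bandwidth comparison of PCIe Gen 3 vs. Gen 4.
# Assumptions (not from the article): x16 slot, 128b/130b encoding,
# theoretical per-direction maxima only.
LANES = 16
GEN3_GT_PER_SEC = 8.0    # PCIe Gen 3: 8 GT/s per lane
GEN4_GT_PER_SEC = 16.0   # PCIe Gen 4: 16 GT/s per lane
ENCODING = 128 / 130     # 128b/130b line-encoding efficiency

gen3 = GEN3_GT_PER_SEC * ENCODING * LANES / 8   # ~15.8 GB/s
gen4 = GEN4_GT_PER_SEC * ENCODING * LANES / 8   # ~31.5 GB/s
print(f"PCIe Gen 3 x16: ~{gen3:.1f} GB/s per direction")
print(f"PCIe Gen 4 x16: ~{gen4:.1f} GB/s per direction ({gen4 / gen3:.0f}x)")
```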

Along comes NVMe-oF, which is, at its core, about eliminating latency.  How do you take an already low-latency fabric, like InfiniBand or Fibre Channel, and make it faster?  Replace SCSI with NVMe and enable NVMe from server to fabric to storage array.  The FlashSystem 900 has been shipping with InfiniBand using SRP (SCSI RDMA Protocol) for many years.  In the technology preview, the very same InfiniBand adapter, based on the Mellanox chipset, is instead used with the OpenFabrics driver distribution to support NVMe-oF over InfiniBand.
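
For readers who want to experiment with the host side of this on Linux, the sketch below shows how an initiator typically discovers and connects to an NVMe-oF target over RDMA using the standard nvme-cli tool.  The target address, port, and subsystem NQN are hypothetical placeholders, not details of the preview configuration, and the commands require root privileges.

```python
#!/usr/bin/env python3
"""Minimal sketch: discover and connect to an NVMe-oF target over RDMA
using the standard Linux nvme-cli tool. All target details below are
placeholders, not the IBM technology preview configuration."""
import subprocess

TARGET_ADDR = "192.168.100.10"   # hypothetical target IP reachable over the IB fabric
TARGET_PORT = "4420"             # default NVMe-oF service port
SUBSYS_NQN = "nqn.2017-12.com.example:fs900-demo"  # hypothetical subsystem NQN

def run(cmd):
    """Run a command, echo it, and return its stdout."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Load the NVMe-over-RDMA host driver (part of the upstream Linux kernel).
run(["modprobe", "nvme-rdma"])

# Ask the target which NVMe subsystems it exports.
print(run(["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT]))

# Connect; the namespaces then appear as ordinary /dev/nvmeXnY block devices.
run(["nvme", "connect", "-t", "rdma", "-n", SUBSYS_NQN, "-a", TARGET_ADDR, "-s", TARGET_PORT])
```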

A storage array that supports NVMe-oF is not enough, however, and this is where the POWER9 systems come into play.  POWER9 is at the core of the AC922, the best server for Enterprise AI. Built from the ground up for accelerated workloads and AI, this IBM Power Systems server can increase I/O throughput up to 5.6x compared to the PCIe Gen3 buses used within x86 servers[1].  In the technology preview, a single POWER9 server with one dual-ported EDR 100 Gbit Mellanox NVMe-oF adapter is attached to five FlashSystem 900 units via a Mellanox Switch-IB 2 SB7800, delivering 41 GB/s of bandwidth (23 GB/s of reads and 18 GB/s of writes).
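
As a back-of-the-envelope check on those figures, the sketch below assumes roughly 12.5 GB/s of raw bandwidth per EDR port per direction (100 Gbit/s signaling, before protocol overhead); that assumption is ours, not the article's.  Reads and writes travel in opposite directions on the link, so each measured stream fits within one direction of the dual-ported adapter, and the 41 GB/s aggregate sits under the roughly 50 GB/s full-duplex ceiling.

```python
# Back-of-the-envelope check of the quoted bandwidth figures.
# Assumption (not from the article): EDR InfiniBand signals 100 Gbit/s per port,
# i.e. roughly 12.5 GB/s per port, per direction, before protocol overhead.
EDR_PORT_GBYTES_PER_SEC = 100 / 8      # ~12.5 GB/s per port, per direction
PORTS = 2                              # dual-ported adapter

per_direction_limit = EDR_PORT_GBYTES_PER_SEC * PORTS   # ~25 GB/s
read_gbs, write_gbs = 23, 18           # measured figures from the preview

print(f"Per-direction limit: {per_direction_limit:.1f} GB/s")
print(f"Reads  {read_gbs} GB/s -> {read_gbs / per_direction_limit:.0%} of one direction")
print(f"Writes {write_gbs} GB/s -> {write_gbs / per_direction_limit:.0%} of the other")
print(f"Aggregate: {read_gbs + write_gbs} GB/s of a "
      f"{2 * per_direction_limit:.0f} GB/s full-duplex ceiling")
```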

This technology preview outlines the new possibilities that will be available with the combination of IBM Storage and IBM Power Systems servers.  While both the FlashSystem 900 and POWER9 are announced products, IBM has not formally announced support for the NVMe-oF protocol on either platform. For more information about IBM Storage, please visit: http://www.ibm.com/storage.






[1] Results are based on IBM internal measurements running the CUDA H2D Bandwidth Test.
Hardware: Power AC922; 32 cores (2 x 16c chips), POWER9 with NVLink 2.0; 2.25 GHz, 1024 GB memory, 4x Tesla V100 GPUs; Ubuntu 16.04. S822LC for HPC; 20 cores (2 x 10c chips), POWER8 with NVLink; 2.86 GHz, 512 GB memory, Tesla P100 GPU.
Competitive HW: 2x Intel Xeon E5-2640 v4; 20 cores (2 x 10c chips) / 40 threads; 2.4 GHz; 1024 GB memory, 4x Tesla V100 GPUs; Ubuntu 16.04.



[Figure: Detailed configuration drawing]




#Flashstorage
#StorageAreaNetworks
#Storage
#Power9
#cognitivecomputing
#FlashSystem
#PrimaryStorage
#NVMe-oF
#Softwaredefinedstorage