
Why iSER is the right high speed Ethernet all-flash interconnect today

By Archive User posted Thu July 20, 2017 08:08 AM

  
All-flash storage is bringing change throughout the data center to meet the demands of modern workloads. Fibre Channel has traditionally been the preferred interconnect for all-flash storage. However, 21st century data center paradigms like cloud, analytics, and software defined storage are driving a definitive shift toward Ethernet infrastructure, with Ethernet connectivity for both servers and storage. As Ethernet speeds rapidly increase to 25/40/50/100 Gbps, Ethernet becomes an increasingly attractive interconnect to all-flash storage. While traditional iSCSI has gained significant ground as an Ethernet interconnect to storage, inefficiencies in the TCP/IP stack prevent it from being the preferred interconnect to all-flash storage.

In comes iSER (iSCSI Extensions for RDMA), which maps the iSCSI protocol onto RDMA. iSER provides an interconnect that is very capable of rivaling Fibre Channel as the all-flash interconnect of choice. It leaves the administrative framework of iSCSI untouched while mapping the data path over RDMA. As a result, management applications like VMware vCenter, OpenStack, etc. continue to work as-is, while the iSCSI data path gets a speed boost from Remote Direct Memory Access. A move from traditional iSCSI to iSER is thus a painless affair that doesn't require any new administrative skills.
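To illustrate how small the administrative change is, here is a minimal sketch of switching an existing Linux open-iscsi initiator from the default TCP transport to iSER. The target IQN and portal address below are placeholders, and an RDMA-capable NIC (iWARP or RoCE) with the ib_iser kernel module loaded is assumed:

    # Discover targets exactly as with plain iSCSI
    iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260

    # Switch the node record from the tcp transport to iser
    iscsiadm -m node -T iqn.2017-07.com.example:flash1 -o update -n iface.transport_name -v iser

    # Log in and confirm that the session is using the iSER transport
    iscsiadm -m node -T iqn.2017-07.com.example:flash1 -l
    iscsiadm -m session -P 1

Everything else (target definitions, CHAP settings, multipathing) stays exactly as it was configured for plain iSCSI.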

iSER retains all the enterprise class capabilities that are expected of Tier 1 shared storage. It also matches or beats Fibre Channel in terms of access latency, bandwidth, and IOPS. Capabilities like multipath IO, SCSI Reservations, Compare and Write, vVols support, and offloaded data copy operations like XCOPY/ODX work from day one on iSER. In addition, iSER benefits from all the SCSI error recovery techniques that have evolved over the years, such as LUN Reset, Target Reset, and Abort. In essence, all enterprise class applications will continue to work as reliably and seamlessly over iSER as they did over iSCSI.

In the iSER IO path, iSCSI is involved only in the Command and Status phases, while the Data Transfer phase is handled entirely by RDMA transfers directly into application buffers, without a copy operation. This compares well with NVMeF in terms of latency reduction.



NVMe over Fabrics, or NVMeF, is a new protocol that promises to take all-flash interconnect technology to the promised land of extreme performance and parallelism, and expectations for it are high. It is still an evolving protocol, though, and it is not yet mature enough to meet the requirements of clustered applications running over shared Tier 1 all-flash storage. It is also a quantum jump: it expects the user not only to move from Fibre Channel to high speed Ethernet technology, but also to adopt a totally new protocol with a new, unfamiliar administrative model. NVMeF will likely take some time to mature before it can be accepted in data centers that require Tier 1 shared all-flash storage. In addition, applications must adapt to a new queuing model to exploit the parallelism offered by flash storage.


That leaves iSER as the right technology to bridge the gap and step in as the preferred interconnect for shared all-flash storage today. iSER is ready from day one for latency, IOPS, and bandwidth hungry applications that want to exploit high speed Ethernet technology, both as a north-south and an east-west interconnect. IO parallelism may not be as high as NVMeF promises, but it is sufficient for all practical purposes, without requiring applications to be rewritten to fit a new paradigm.

By implementing iSER today, the move from Fibre Channel to high speed Ethernet can be tried out without ripping out the entire administrative framework or rewriting applications. This is essential because the two lower level protocols that enable RDMA transport over high speed Ethernet for both iSER and NVMeF, iWARP (Internet Wide Area RDMA Protocol) and RoCE (RDMA over Converged Ethernet), are both new and must go through the rigors of evolving into enterprise class storage interconnects, just as Fibre Channel did over the years. These protocols must acquire the same level of stability, resiliency, and error recovery capabilities as Fibre Channel technology. iSER paves the way toward that in a gradual fashion, long before NVMeF will be ready to do so.
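As a first, low-risk step in such a trial, it is worth confirming that the Ethernet adapters in question actually expose an RDMA device, whichever of the two protocols they implement. A minimal sketch, assuming a Linux host with the iproute2 rdma utility and the libibverbs utilities installed (device names will differ per system):

    # List RDMA devices and the network interfaces they are bound to
    rdma link show

    # Show the capabilities of the installed RDMA devices
    ibv_devinfo

If nothing is listed here, the host has no usable RDMA transport, and neither iSER nor NVMeF traffic can use it, so this check belongs before any initiator- or target-side configuration.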

At IBM we are working toward enabling our customers to move to data center infrastructure that consists purely of Ethernet interconnects, with speeds scaling rapidly from 10 to 100 Gbps. Built over iSER, this capability is all-flash ready from day one. Agnostic of the underlying RDMA protocol (iWARP or RoCE), it is likely to be very attractive for software defined storage infrastructure that is expected to be built from commodity hardware. It enables IBM Spectrum Virtualize products (IBM Storwize and IBM SVC) to be deployed on cloud infrastructure where Ethernet is the only available interconnect. To get there, we have partnered with multiple hardware and software vendors that are at the forefront of the high speed Ethernet revolution.

So get ready to experience all-flash storage connected over high speed Ethernet from IBM sometime in the near future!














Comments

Tue August 15, 2017 03:13 PM

Hi Subhojit,

Nice article. One question I have about iSER for storage: what is the Windows strategy for using the RDMA connection for performance in that operating system? Windows does not support iSER, and it seems some effort is needed to determine how we can connect iSER storage in Windows environments.

Also, when using iSER in VMware environments, is there any special configuration that has to be done in order for VMware management applications to behave as they did pre-iSER?

Thanks.