NVMe too brigade update: DDN sees limited appeal in NVMe fabrics

The drives? For sure


Interview How does HPC array vendor DataDirect Networks view NVMe drives and NVMe over Fabrics?

We are running a series of interviews with vendors as NVMeF looks increasingly likely to become the future way to access SAN storage, replacing the current Fibre Channel implementations, whether the hardware or the HBA software.

However, NVMeF has less relevance to file-level access, as DDN’s SVP for global sales, marketing and field services, Robert Triendl, makes clear in this interview.

El Reg: Will simply moving from SAS/SATA SSDs to NVMe drives bottleneck existing array controllers, and must we wait for next-generation controllers with much faster processing?

Robert Triendl: For most vendors, almost certainly. I don’t think we have this issue (we use pretty much the latest CPUs and server boards available all the time). Our SFA architecture was built with low-latency, high-IOPS devices in mind. We scale very efficiently with multi-core processors, as compared to traditional single-threaded, embedded architectures, which must rely on clock speed.

The primary impediment is how to carry this increased speed from low-latency, high-IOPS devices through the file system to the end user. The DDN IME product is a good example of this, and DDN is also working on several features in the SFA product and the file system to alleviate this bottleneck.

El Reg: Will we need affordable dual-port NVMe drives so array controllers can provide HA, and what does affordable mean?

Robert Triendl: We have been shipping dual-ported NVMe drives in one of our products for about a year (and we were probably the first vendor to ship such a product, in late 2015). However, the product using these drives (the IME 14K) provides a file-level interface, with a special client module, rather than a block interface (e.g. NVMeF). It runs over RDMA fabrics (IB or OPA), but obviously not NVMeF.

We have recently switched to single-ported NVMe drives for this particular product, since we do not require dual-ported NVMe drives for HA (rather, data is erasure-coded over the network in a quite clever way, so single-ported drives are just fine). The reasons are cost and availability.
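DDN does not describe its “quite clever” erasure-coding scheme in detail, but the general idea of trading dual-ported hardware for network-level redundancy can be sketched with a minimal (hypothetical, single-parity XOR) example: data is split into shards spread across separate servers or single-ported drives, plus a parity shard, so the loss of any one drive is recoverable from the survivors.

```python
# Illustrative sketch only, NOT DDN's actual scheme: single-parity
# (XOR) erasure coding across k single-ported drives plus one parity
# drive, tolerating the loss of any one drive.
from functools import reduce

def encode(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal shards and append one XOR parity shard."""
    data += b"\x00" * ((-len(data)) % k)      # pad to a multiple of k
    size = len(data) // k
    shards = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*shards))
    return shards + [parity]

def recover(shards: list) -> list[bytes]:
    """Rebuild the single missing shard (None) by XOR-ing the survivors."""
    missing = shards.index(None)
    survivors = [s for s in shards if s is not None]
    shards[missing] = bytes(
        reduce(lambda a, b: a ^ b, col) for col in zip(*survivors)
    )
    return shards

shards = encode(b"hello erasure world", k=4)
shards[2] = None                  # simulate one drive/server failure
restored = recover(shards)
print(b"".join(restored[:4]).rstrip(b"\x00"))  # original data back
```

Real deployments use stronger codes (e.g. Reed–Solomon) to survive multiple concurrent failures, but the HA argument is the same: redundancy lives across the network rather than in a second drive port.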

Right now, dual-ported drives are only just becoming available in larger quantities and remain very, very expensive, with a steep premium over single-ported devices. By contrast, single-ported devices have been in the market for years. There is no question that single-ported NVMe drives are becoming the new “SATA” (the SSD in most recent PCs, and even mobile devices, will use NVMe rather than SATA), and we believe the cost differences will remain significant for some time to come; hence our focus on single-ported NVMe devices.

That said, we are planning a revision of the SFA architecture with dual-ported NVMe devices for later 2017, to take advantage of the broader availability of these devices. Our architecture with the SFA14K already physically supports the dual-ported NVMe devices.

We see the dual-ported NVMe market today as having artificially inflated costs, since volume is only now starting to increase. Over time, we expect dual-port to become a standard, every-unit feature (like SAS) with no real price premium, especially since it uses essentially the same silicon as the single-port versions. Dual-ported devices will be a necessity for very high-performance array controllers that maximise performance while still providing high availability.

El Reg: Are customers ready to equip NVMeF array-accessing servers with new HBAs and, for RoCE, DCB switches, and to deal with end-to-end congestion management? Do they need routability with RoCE?

Robert Triendl: We are starting to see some inquiries, and some limited demand for fairly exotic solutions. The technology remains somewhat early-stage and immature, so we expect broader adoption to ramp up in one to two years. Use cases for adoption will vary; for a pure SAN-like fabric, routing is less of an issue, but for a larger fabric connecting a large number of servers or VMs it will certainly be important.

The NVMeF standard provides its main benefit in traditional large cloud infrastructures that are trying to disaggregate hundreds of individual storage devices from thousands of servers with any-to-any connectivity. In high-performance array devices there is no such requirement, and direct NVMe-connecting (or SAS-connecting) the external devices provides the highest performance at the lowest cost.

For storage devices that sit below a file system layer, NVMeF is probably irrelevant. However, we are certainly looking at ways to interface IME even more closely with physical NVMe devices (IME talks directly to the device, with no additional layer in between), and NVMeF might be an interesting approach for a storage fabric below IME. Again, this is at the investigation stage.

El Reg: Could we cache inside the existing array controllers to augment existing RAM buffers and so drive up array performance, with flash DIMMs say? Or XPoint DIMMs in the future?

Robert Triendl: I would say that the bottleneck in the controllers lies more with processing, and with software architectures that can exploit large multi-core CPUs, than with memory or memory bandwidth. That said, any kind of flash array that uses log-structuring will require extensive data structures in memory, and flash DIMMs or XPoint will provide an interesting way to extend memory capacity. We see multiple uses for technologies such as 3D XPoint, including hierarchical storage of user data as well as data-structure memory. Keep in mind that you will also need technology to make this memory survive a single point of failure.
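Triendl’s point about log-structuring and memory can be illustrated with a generic (hypothetical, not any vendor’s actual design) sketch: a log-structured store appends every write sequentially, which suits flash, but must keep a logical-to-physical mapping table in RAM so reads stay fast. That table grows with the number of live blocks, which is where the memory pressure comes from.

```python
# Generic illustration of a log-structured layout, not DDN's design:
# writes append to a sequential log; an in-memory mapping table
# (logical block address -> log position) is the RAM cost Triendl
# mentions, and it must also survive failure in a real array.
class LogStructuredStore:
    def __init__(self):
        self.log = []        # append-only "flash" log of (lba, data)
        self.mapping = {}    # lba -> index in log; grows with live blocks

    def write(self, lba: int, data: bytes) -> None:
        self.log.append((lba, data))           # sequential, flash-friendly
        self.mapping[lba] = len(self.log) - 1  # old entry becomes garbage

    def read(self, lba: int) -> bytes:
        return self.log[self.mapping[lba]][1]  # one RAM lookup, one read

store = LogStructuredStore()
store.write(7, b"v1")
store.write(7, b"v2")    # overwrite appends; mapping now points to v2
print(store.read(7))     # b'v2'; stale v1 awaits garbage collection
```

Flash DIMMs or 3D XPoint would extend the capacity available for exactly this kind of mapping structure, at the price of needing protection against a single point of failure, as Triendl notes.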

El Reg: Does having an NVMe over fabrics connection to an array which is not using NVMe drives make sense?

Robert Triendl: Well, if the technology were mature, you might argue for it, but it isn’t, so I feel this is rather unlikely to happen. I am sure you will remember the FCoE dance…


Robert Triendl summed up DDN’s approach to NVMe this way: “While we have embraced NVMe, we remain a bit skeptical, but we are certainly looking at various options to use NVMeF, perhaps less in a classical array approach than in an SDS approach.”

NVMe drives are used and valued by DDN, but NVMe over Fabrics is a technology that won’t take its customers by storm. ®
