Hello, SAN-shine. You and NVMe are going to have a little chinwag

El Reg chats to Nimble man about coming wave of NVMe adoption

Analysis A sea change is gathering pace in storage, powered by NVMe drive-level and fabric-level connectivity. The two declare war on data access latency, combining to bring data closer to compute and get more applications running faster in servers.

The change we are facing is a move from the SAS/SATA drive array accessed over Fibre Channel or iSCSI to one accessed across an NVMe fabric and using NVMe drives inside. And there is another change going on: the rise of hyper-converged infrastructure (HCI) systems with virtual, not physical, SANs. Although such HCI systems will eat into the physical SAN market, they are not likely to destroy it, and the SAN market is going to stick around for many years, especially if its data access latency disadvantages can be removed.

NVMe storage nirvana, the combination of NVMe-accessed drives and NVMe over Fabrics-accessed shared storage arrays, is not a simple plug-and-play change. A series of steps is needed to build a staircase to NVMe heaven and enable general adoption of NVMe storage. We're asking people in the industry what they think of these steps and how general NVMe storage adoption might take place.

The suppliers we are approaching include Dell EMC, E8, HDS, HPE, Huawei, IBM, Kaminario, Lenovo, Mangstor, NetApp, Nimble, Pure, Tegile and Tintri; all of them shared storage array suppliers.

Dimitris Krekoukias is a global technology and strategy architect at Nimble Storage. Here are his ideas about NVMe adoption, which he emphasises are his personal views of how the storage array-using community might adopt NVMe, not to be taken as indicative of Nimble Storage plans or intentions.

El Reg What are NVMe's advantages?

Dimitris Krekoukias NVMe is a relatively new standard that was created specifically for devices connected over a PCIe bus. It has certain nice advantages versus SCSI, such as reduced latency and improved IOPS. Sequential throughput can be significantly higher. It can be more CPU-efficient. It needs a small and simple driver, the standard requires only 13 commands, and it can also be used over some FC or Ethernet networks (NVMe over Fabrics). Going through a fabric only adds a small amount of extra latency to the stack compared to DAS.
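To put rough numbers on that, here is a minimal Python sketch contrasting the queueing model and an indicative latency budget for SATA/AHCI, local NVMe, and NVMe over Fabrics. The queue limits come from the AHCI and NVMe specifications; the per-layer latency figures are assumed ballpark values for illustration only, not benchmarks.

```python
# Illustrative comparison of queueing limits and latency budgets.
# Queue limits are per the AHCI and NVMe specs; latency figures are
# assumed ballpark values for illustration, not measurements.

QUEUE_MODEL = {
    "AHCI/SATA": {"queues": 1, "commands_per_queue": 32},
    "NVMe":      {"queues": 65_535, "commands_per_queue": 65_536},
}

# Assumed per-layer latencies in microseconds (illustrative only).
MEDIA_US = 80        # assumed flash read latency
SCSI_STACK_US = 25   # assumed SAS/SATA software + HBA overhead
NVME_STACK_US = 5    # assumed leaner NVMe driver path
FABRIC_HOP_US = 10   # assumed extra hop for NVMe over Fabrics (RDMA)

def total_latency(stack_us, fabric_us=0):
    """End-to-end read latency = media + host stack + optional fabric hop."""
    return MEDIA_US + stack_us + fabric_us

if __name__ == "__main__":
    for name, q in QUEUE_MODEL.items():
        print(f"{name}: {q['queues']} queue(s) x {q['commands_per_queue']} commands")
    print(f"SAS/SATA DAS      : ~{total_latency(SCSI_STACK_US)} us")
    print(f"NVMe DAS          : ~{total_latency(NVME_STACK_US)} us")
    print(f"NVMe over Fabrics : ~{total_latency(NVME_STACK_US, FABRIC_HOP_US)} us")
```

The point of the toy model is the shape of the numbers, not their absolute values: the fabric hop adds little on top of the media itself, while the protocol stack and queueing limits are where SCSI gives ground.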

El Reg Why and where should we use NVMe drives now?

Dimitris Krekoukias NVMe drives are a no-brainer in systems like laptops and DASD/internal to servers. Usually there is only a small number of devices (often just one), and no fancy data services are running on something like a laptop... replacing the media with a better medium and interface is a good idea.

For enterprise arrays, though, the considerations are different.

El Reg Why are NVMe drives in shared drive arrays a problem?

Dimitris Krekoukias Tests illustrating NVMe performance show a single NVMe device being faster than a single SAS or SATA SSD. But storage arrays usually don't have a single device and so drive performance isn't the bottleneck as it is with low media count systems.

The main bottleneck in arrays is the array controller, not the SSDs (simply because a couple of dozen modern SAS/SATA SSDs have enough performance to saturate most systems). Moving to competent NVMe SSDs means those same controllers will be saturated by maybe 10 NVMe SSDs. For example, a single NVMe drive may be able to read sequentially at 3GBps, whereas a single SATA drive does 500MBps. Putting 24 NVMe drives behind the controller doesn't mean the controller will now magically deliver 72GBps. In the same way, a single SATA SSD might be able to do 100,000 small-block random read IOPS and an NVMe drive with better innards 400,000 IOPS. Again, it doesn't mean that same controller with 24 devices will suddenly do 9.6 million IOPS!
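The arithmetic behind that point fits in a few lines of Python. The per-drive figures are the ones Krekoukias quotes above; the controller ceilings are assumed, hypothetical values chosen purely to show why aggregate drive capability can far outstrip what the controller can push.

```python
# Why adding faster drives doesn't scale array throughput linearly:
# the controller saturates first. Drive figures are from the interview;
# the controller ceilings are assumed, hypothetical values for illustration.

DRIVES = 24

SATA_SSD = {"seq_gbps": 0.5, "rand_iops": 100_000}   # per the interview
NVME_SSD = {"seq_gbps": 3.0, "rand_iops": 400_000}   # per the interview

# Assumed controller ceilings (hypothetical, illustration only).
CONTROLLER = {"seq_gbps": 15.0, "rand_iops": 1_500_000}

def delivered(drive):
    """The array delivers the lesser of aggregate drive capability and the controller ceiling."""
    agg_gbps = DRIVES * drive["seq_gbps"]
    agg_iops = DRIVES * drive["rand_iops"]
    return (min(agg_gbps, CONTROLLER["seq_gbps"]),
            min(agg_iops, CONTROLLER["rand_iops"]),
            agg_gbps, agg_iops)

for name, drive in (("SATA", SATA_SSD), ("NVMe", NVME_SSD)):
    got_gbps, got_iops, raw_gbps, raw_iops = delivered(drive)
    print(f"{name}: drives could supply {raw_gbps:.0f} GBps / {raw_iops:,} IOPS, "
          f"array delivers ~{got_gbps:.0f} GBps / ~{got_iops:,} IOPS")
```

With 24 NVMe drives the media could theoretically feed 72GBps and 9.6 million IOPS, but the array only ever delivers whatever the controller can process.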

El Reg Are there other array-level NVMe drive problems?

Dimitris Krekoukias Current NVMeF arrays prioritise performance and tend not to have HA, very strong RAID, multi-level checksums, encryption, compression, data reduction, replication, snaps, clones, hot firmware updates, or the ability to dynamically scale a system.

Dual-ported SSDs are crucial in order to deliver proper HA. Current dual-ported NVMe SSDs tend to be very expensive per TB versus current SAS/SATA SSDs.

El Reg How can we fix these issues?

Dimitris Krekoukias Because of the much higher speed of the NVMe interface, many CPUs and PCIe switches are needed to create a highly scalable system that can fully utilize such SSDs and maintain enterprise features, even with future CPUs that include FPGAs. That further explains why most NVMe solutions using the more interesting devices tend to be rather limited.

There are also client-side challenges.

El Reg What client-side challenges?

Dimitris Krekoukias Using NVMe over Fabrics can often mean purchasing new HBAs and switches, plus dealing with some compromises. For instance, in the case of RoCE, DCB switches are necessary, end-to-end congestion management is a challenge, and routability is not there until v2.

El Reg So how can we take advantage of NVMe without taking away business-critical capabilities?

Dimitris Krekoukias Most customers are not ready to adopt host-side NVMe connectivity – so have a fast, byte-addressable device inside the controller to massively augment the RAM buffers (like 3D XPoint in a DIMM) or, if that's not available, some next-gen NVMe drives to act as cache. That would provide an overall speed boost to clients without needing any client-side modifications.
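One way to picture that controller-side cache idea is a read cache sitting in front of slower media: reads that hit the fast device never touch the slower tier. The sketch below is a generic, hypothetical LRU read cache in Python, not Nimble code; names such as CachedReader and the stand-in slow tier are made up for illustration.

```python
from collections import OrderedDict

# A generic LRU read cache fronting a slower tier -- a hypothetical sketch of
# using a fast byte-addressable device (or NVMe drive) as a controller-side
# cache. Class and function names are illustrative, not any vendor's API.

class CachedReader:
    def __init__(self, slow_tier_read, capacity_blocks):
        self._read_slow = slow_tier_read      # function: block_id -> bytes
        self._capacity = capacity_blocks
        self._cache = OrderedDict()           # stands in for the fast device

    def read(self, block_id):
        if block_id in self._cache:
            self._cache.move_to_end(block_id)  # refresh LRU position on a hit
            return self._cache[block_id]
        data = self._read_slow(block_id)       # miss: fetch from the slow tier
        self._cache[block_id] = data
        if len(self._cache) > self._capacity:  # evict the least recently used block
            self._cache.popitem(last=False)
        return data

# Example usage with a stand-in slow tier:
reader = CachedReader(slow_tier_read=lambda b: b"data-%d" % b, capacity_blocks=1024)
print(reader.read(7))   # first read: miss, served from the slow tier
print(reader.read(7))   # second read: hit, served from the cache
```

Because the caching happens entirely inside the array, the hosts keep talking plain Fibre Channel or iSCSI and still see the benefit.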

An evolutionary second option would be to change all internal drives to NVMe, but to make this practical would require wide availability of cost-effective, dual-ported devices. Note that with low SSD counts (less than 12) this would provide speed benefits even if the customer doesn't adopt a host-side NVMe interface, but it might be a diminishing returns endeavor at scale, unless the controllers are significantly modified.
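That diminishing-returns point lends itself to a quick back-of-the-envelope model: with a fixed controller ceiling, swapping SATA for NVMe helps a lot at low drive counts and progressively less as the controller becomes the limit. The per-drive IOPS figures below are the ones quoted earlier; the controller ceiling is an assumed, hypothetical value.

```python
# Back-of-the-envelope: NVMe vs SATA speedup at different drive counts when
# the controller is a fixed ceiling. Per-drive IOPS figures are from the
# interview; the controller ceiling is an assumed, hypothetical value.

SATA_IOPS = 100_000
NVME_IOPS = 400_000
CONTROLLER_IOPS = 1_500_000   # assumed ceiling, illustration only

def array_iops(per_drive, count):
    """Delivered IOPS is capped by the controller, whatever the drives can do."""
    return min(per_drive * count, CONTROLLER_IOPS)

for count in (2, 4, 8, 12, 24):
    speedup = array_iops(NVME_IOPS, count) / array_iops(SATA_IOPS, count)
    print(f"{count:>2} drives: NVMe gives {speedup:.1f}x the SATA array's IOPS")
```

Under these assumed numbers the NVMe advantage is roughly 4x at two drives, shrinking towards 1x by the time two dozen drives have saturated the unmodified controller, which is exactly the shape of the curve Krekoukias describes.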

El Reg And when customers are ready and willing to adopt NVMe over Fabrics?

Dimitris Krekoukias In this case, the first thing that needs to change is the array connectivity to the outside world. That alone will boost speeds on modern systems even without major modifications.

The next step depends on the availability of cost-effective, dual-ported NVMe devices. But for very large performance benefits to be realized, pretty big boosts to CPU and PCIe switch counts may be necessary, meaning bigger changes to storage systems (and increased costs).

Comment

Fibre Channel SANs have provided enormous benefits over the past decade and more, but they are rooted in the disk and pre-virtualised server era. In today's VM and developing containerised server world, with multi-socket, multi-core CPUs and flash drives, Fibre Channel and disk-based drive interfaces are stone age.

NVMe drives promise to replace SAS and SATA interface media while NVMe over Fabrics promises to replace Fibre Channel and even iSCSI. We're at the start of a tipping point and if the tip happens then the storage world will be radically different, and better. ®
