
Tall, slim models are coming to take over dumpy SSD territory

We don't need oblong boxes designed for round disks any more

Interview Intel and Samsung have introduced "ruler" format SSDs, longer than the standard 2.5-inch drive format and with higher capacities. However, adoption has been slow: Supermicro is bringing in servers that use ruler SSDs, but few other vendors have followed.

Why is that? Are the formats unsuitable in some way? We asked senior 451 Research analyst Steven Hill for his views on the new formats and the issues affecting their likely take-up.

Intel 'ruler' SSD

A brief recap: Intel's ruler is 325.35mm long, 9.5mm wide and 38.6mm high, and 32 fit across the front of a 1U rackmount enclosure. Chipzilla calls it the Enterprise and Datacenter SSD Form Factor (EDSFF).

Supermicro server using 36 x Samsung NGSFF drives

Samsung's mini-ruler NGSFF (next-generation small form factor) drive is 110mm long, 30.5mm wide, and 4.38mm high, with 36 of them slotting into a 2U enclosure.

Samsung NGSFF drive card

Hill thinks there are two aspects to consider with ruler-format drive adoption: the format itself and the NVMe access protocol.

El Reg: How do you think these formats compare to the existing 2.5-inch drive format in terms of capacity, space density and power efficiency?

Steven Hill: It was really just a matter of time until the industry came up with a new model for externalising flash. The technology lends itself to a completely new storage form factor because flash offers the option to design long and narrow storage modules rather than being limited to the inefficiency of fitting a round spinning disk in a rectangular hole.

Capacity will only increase with upcoming generations of flash chips, but substantially increasing flash capacity also introduces a new set of challenges in terms of heat management and power envelopes.

The power consumption of today's high-performance and high-capacity flash devices can be quite similar to that of spinning disk, partly because of the challenge of providing increased on-chip processing capabilities to maintain performance as flash capacity increases on a module.

It's possible that flash manufacturers could eventually take a two-tiered approach, offering slower but more energy-efficient flash for capacity and faster but smaller modules targeting performance, but that's pure conjecture on my part. There's really no reason to take a "one size fits all" approach to flash capacity/efficiency/performance at a modular level.

El Reg: Do you envisage them being used in JBOFs, storage arrays, hyperconverged systems and/or ordinary servers?

Steven Hill: A really good question, this. The short answer is all of the above, but over time.

Flash is proving to be far more resilient in enterprise applications than anyone expected, so there's no strong reason (other than cost) why flash won't eventually overtake disk.

Of course, maintaining backward compatibility is always an issue in storage, so SAS-based flash is currently the lowest common denominator as a direct replacement for enterprise disk, because the SAS abstraction handles the in-band messaging and management, capabilities that are still under development for NVMe.

SAS has a decade of enterprise-class utilization that works very well in a traditional array model, but NVMe is capable of bypassing the legacy storage model to connect directly to PCIe. This is huge in terms of raw performance, but that's only one factor in an enterprise storage formula that's also concerned with reliability, data protection and management. There have already been announcements regarding fourth-generation 24Gbit/s SAS, so raw bandwidth isn't as much of an issue when comparing performance at device level.

JBOF is easiest, of course, though I think the bigger challenge is how to best use a 1U server that's packed to the gills with an insanely fast petabyte of storage.

El Reg: Is there a bandwidth issue?

Steven Hill: Providing massive internal storage bandwidth is one thing, but there's only so much storage you can consume for production within a 1U or 2U server. An array or SDS application is an obvious choice, but extending all that speed and capacity to external clients is something completely different.

For example (and please forgive the cocktail napkin math), Intel's design showing 32 NVMe ruler modules in a 1U would utilize a minimum of 128 PCIe lanes for the drive connections alone and could conservatively be capable of 2GB/sec per module on reads. That could theoretically generate 64GB/sec, or 512Gbit/sec, of internal storage bandwidth within a single 1U server.
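For the curious, here's that napkin math as a quick Python sketch, using the same assumptions Hill does: x4 PCIe links per device and a conservative 2GB/sec of reads per module.

```python
# The cocktail napkin math above, worked through in Python. The x4 link and
# 2GB/sec-per-module read figures are conservative assumptions, not measurements.

modules = 32                  # EDSFF ruler modules in a 1U chassis
lanes_per_module = 4          # a typical NVMe device uses a PCIe x4 link
read_gb_s_per_module = 2      # conservative per-module read throughput (GB/sec)

total_lanes = modules * lanes_per_module         # 128 PCIe lanes
total_gb_s = modules * read_gb_s_per_module      # 64 GB/sec aggregate reads
total_gbit_s = total_gb_s * 8                    # 512 Gbit/sec

print(f"{total_lanes} lanes, {total_gb_s} GB/sec = "
      f"{total_gbit_s} Gbit/sec internal bandwidth")
```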

Impressive, yes (I'll take two, please), but also somewhat impractical if you can't get it out of the chassis, which leads to our belief that we need to rethink storage in order to make the best use of these new flash technologies.

El Reg: What do you mean?

Steven Hill: I've learned that one of the keys to evaluating storage lies in "chasing the bottleneck" because it moves around based on component, system and application factors. Enterprise storage with this much capacity and performance adds a whole new dimension to that particular challenge; something that Dell EMC learned the hard way with DSSD.

We believe that flash-based storage, and NVMe technology in particular, will eventually create an inflection point for enterprise storage. The speed and capacity of NVMe can't be overlooked, but today the NVMe ecosystem is nowhere near as complete or as well validated as SAS for enterprise applications.

El Reg: Specifically, do you think Intel's ruler format has a role to play with servers? Ditto Samsung's mini-ruler format?

Steven Hill: Absolutely. Even the cheapest, scruffiest PCIe/NVMe consumer-level drive you can buy today is several times faster than any enterprise spinning disk – and even many legacy arrays – so adoption of NVMe is a no-brainer.

The challenge was externalising NVMe for PCIe and providing hot-swap and error-handling capability that's equivalent to SAS technology. Even though PCIe was already theoretically hot-swappable, the common practice of pulling and replacing persistent storage devices on the fly requires more coordination between BIOS, driver, OS, and application than PCIe was originally designed to handle.
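To give a rough shape to that coordination, here's a minimal, Linux-flavoured sketch of an orderly NVMe hot-remove and rescan through sysfs. The PCI address is a made-up example, and real tooling would also have to quiesce the filesystem, the application and any multipath layer before the module is pulled.

```python
# Minimal sketch of an orderly NVMe hot-remove and re-add on Linux via sysfs.
# The PCI address is a hypothetical example; real tooling would first quiesce
# the filesystem, the application and any multipath layer using the drive.

PCI_ADDR = "0000:3b:00.0"   # hypothetical PCIe address of the NVMe drive

def remove_device(addr: str) -> None:
    # Ask the kernel to detach the driver and tear down the PCI device
    # before the module is physically pulled.
    with open(f"/sys/bus/pci/devices/{addr}/remove", "w") as f:
        f.write("1")

def rescan_bus() -> None:
    # After the replacement module is inserted, rescan so it is re-enumerated
    # and the NVMe driver binds to it again.
    with open("/sys/bus/pci/rescan", "w") as f:
        f.write("1")

if __name__ == "__main__":
    remove_device(PCI_ADDR)
    input("Swap the module, then press Enter...")
    rescan_bus()
```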

It's not surprising that there are two physical standards in play for externalised NVMe, and I can see value in both.

Based on initial information, both models appear equally suitable in a technical sense, so the key difference between the two is storage capacity versus physical server real estate.

The Intel Ruler module based on the EDSFF design is physically twice as long as Samsung's NGSFF-based module, which pretty much guarantees there's no potential interoperability, even if the pin-outs were compatible.

The Intel design may make longer-term sense as you look further out to NVMe as a capacity play, but the current focus for enterprise flash is on the performance tier, so Samsung's smaller module length may not be much of an issue from a capacity standpoint, and will take up less server real estate.

We've seen cloud-focused, 1U designs that utilise really long sticks of internal NVMe and connect via 100Gbit Ethernet, but those are optimised for cloud-scale operations. Again, the model for a flash-based capacity tier is only starting to emerge.

El Reg: How do you view Samsung's implementation of an object storage facility on its mini-ruler SSD? Is that something you would use?

Steven Hill: We've been pondering that issue very recently ourselves, and that's not an easy question to answer.

Object storage is one of my key coverage areas, and there's a part of me that likes the idea of individual storage devices with object and IP capabilities, but there's another part that says it doesn't make much sense to bog down every drive with all that extra overhead.

When I looked at concepts like Seagate's Kinetic drive I felt that object abstraction didn't really fit at device level, because enterprise object and SDS systems are designed to utilise and protect stateless storage devices. But new capabilities merit new technologies, so we're going to revisit that in the context of large-capacity flash modules as the economics change.

Then there is the whole "Open Channel" SSD premise, pitched by the likes of CNEX Labs and the open-source LightNVM project, whose backers believe it's inefficient to even put a flash translation layer (FTL) on the NVMe module itself.*

Just let software handle EVERYTHING, including direct placement and movement within the physical NAND chips themselves. I can see how it makes sense for some specific workloads, but I think it also puts raw performance ahead of the hardware resilience offered by having individual FTLs at device level.
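To make the Open Channel premise a little more concrete, here's a toy host-side mapping-table sketch of the sort of bookkeeping an FTL normally does on the device; the class and method names are ours for illustration and don't correspond to any real open-channel API.

```python
# Toy illustration of the mapping work an FTL does (tracking which physical
# NAND block holds each logical block), hoisted into host software as the
# Open Channel model proposes. The names are ours; this is not a real API.

class HostSideFTL:
    def __init__(self, chips: int, blocks_per_chip: int):
        self.mapping = {}   # logical block number -> (chip, physical block)
        self.free = [(c, b) for c in range(chips) for b in range(blocks_per_chip)]

    def write(self, lbn: int) -> tuple:
        # Flash can't overwrite in place, so every write goes to a fresh
        # physical block; the superseded block goes back to the pool
        # (a real FTL would erase it and handle wear levelling first).
        chip, block = self.free.pop(0)
        stale = self.mapping.get(lbn)
        self.mapping[lbn] = (chip, block)
        if stale is not None:
            self.free.append(stale)
        return chip, block

    def read(self, lbn: int) -> tuple:
        return self.mapping[lbn]
```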

And then both device-based object storage and Open Channel SSDs rely heavily on Ethernet-based networking, which also has an impact on the moving bottleneck and raises obvious questions regarding QoS, security and contention for resources on shared networks.

+Comment

Ruler and mini-ruler SSD format adoption is inextricably bound up with the maturing of the NVMe protocol relative to SAS. SAS is not as fast as NVMe, nor does it enable a direct connection to the PCIe bus.

However, server and storage systems using SAS have enterprise features such as dual-port and hot-plug support. Until NVMe gets these and similar features, its adoption will be held back.

It's a general assumption that both the ruler and mini-ruler drive formats will need NVMe access, notwithstanding a coming 24Gbit/s SAS protocol. Hence their adoption is being slowed by NVMe's lack of such enterprise features.

This probably explains, firstly, why mainstream server and storage array vendors have not adopted them yet, and, secondly, why only Intel and Samsung have developed the formats. The other SSD vendors are waiting to see what happens before deciding if or when and which way to jump. ®

Bootnote

*IBM researchers are also looking at host-based FTLs.
