
How flash is panning out in the enterprise

Flash! A-aaaah! King of the Impossible!

Flash storage was once a plaything for moneyed-up, high-performance tech elite. No more. Now, it’s finding its way into Joe Average’s enterprise architecture. Here’s where it came from, where it is today, and where it might be going.

Today, you’ll find flash memory in most mobile devices, but its history is a long one, extending back far beyond the smartphone. Back in 1967, Bell Labs invented the floating-gate memory device, although it wouldn’t be until the late 70s that we would first see non-volatile RAM on the market.

Intel didn’t begin developing flash until the mid-80s, mostly based around NOR logic gates, and it was this memory format that began finding its way into early SSDs and some of the first pocket-sized computers back in the early nineties.

The alternative NAND flash that would come to dominate the market began trickling out in the mid-90s from firms like Samsung and Toshiba. Unlike NOR flash, it was only block-addressable, meaning that software had to write to it in chunks rather than on a byte-by-byte basis. On the plus side, it was faster and cheaper, which would help propel the market forward.
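
To make that distinction concrete, here is a toy Python sketch of what block-addressable writes look like from the software side. The page and block sizes are purely illustrative, not those of any particular chip.

    # Toy model of NAND-style block addressing: you program whole pages, not single bytes.
    PAGE_SIZE = 4096          # smallest unit that can be written (illustrative)
    PAGES_PER_BLOCK = 64      # smallest unit that can be erased (illustrative)

    def write_chunked(pages, start_page, data):
        """Pad the payload out to full pages and program each page whole."""
        for offset in range(0, len(data), PAGE_SIZE):
            chunk = data[offset:offset + PAGE_SIZE].ljust(PAGE_SIZE, b"\xff")
            pages[start_page + offset // PAGE_SIZE] = chunk

    flash = {}                                    # page number -> page contents
    write_chunked(flash, 0, b"hello, enterprise flash")
    print(len(flash[0]))                          # 4096: a 23-byte write still costs a full page

Even a 23-byte write consumes a whole page, and erases happen at the much larger block level, which is why SSD controllers spend so much effort on write management.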

While flash would find its way into consumer-grade mobile devices and USB drives, the enterprise wouldn’t pick it up for back-office infrastructure until much later.

“It really started hitting its stride a little over a decade ago,” says Greg Schulz, advisory analyst at StorageIO. “Once the media became reliable enough and it became cost-effective and plentiful, then it became viable for use in the enterprise.”

Rapid adoption

Today, flash is still considered an expensive technology, explains Tom Coughlin, storage analyst and president of Coughlin Associates. “It’s still more expensive for capital costs than buying hard drives, so it’s going to be used for high-performance applications,” he says.

Nevertheless, he sees the world moving to increasingly flash-based environments. “We are moving into a flash-first world, with flash as primary storage, but hard disk drives are still secondary storage for many people,” he says.

Figures bear this out. IDC’s disk storage systems tracker noted a 40.1 per cent growth for all-flash arrays (AFAs) in the EMEA marketplace during Q3 2017, giving them 28 per cent of all external storage sales. This compares to a 15.9 per cent contraction in hard drive sales.

AFAs are gaining significant traction over hybrid arrays, too. The IDC numbers show hybrid arrays growing at a far slower rate than AFAs, clocking a mere 15.3 per cent increase in EMEA during Q3.

Coughlin believes that hard drive vendors are largely giving up on the spinning medium as a tier-one, high-performance storage mechanism. We’re in the last generation of 10,000 to 15,000RPM drives, he reckons, and the vendors will move to SSDs instead.

Changing workloads

Bryan Betts, principal analyst at Freeform Dynamics, doesn’t look at specific market numbers, but he does talk to many customers who have bought flash storage for the enterprise. AFA shifted to become the favourite last year, he says.

“We all hear the stuff from companies about what AFA is good for. People use it for databases and hosting virtual machines, and all that good stuff,” he says. “What’s interesting is when you look beyond that and ask what they use it for that they didn’t expect to use it for at the start.”

AFA is becoming a powerful engine for big data analytics, he says. It is also making significant headway in cloud environments, not least because of its self-managing capability. AFAs have evolved from being simply big disk arrays into self-managing boxes with features like automation and self-tuning storage, which makes them a good fit for those environments.

He is also seeing AFA beginning to take over mainstay functions such as rapid backup and recovery. Storing and retrieving disk snapshots quickly is a good use of the tech, he says.

He also sees IT departments using AFA for file servers, and for email collaboration and workflow. “It’s because they are fairly heavy-duty applications these days,” he reminds us. “Once you add in the ability to deduplicate and compress, this stuff isn’t much more expensive than primary disk.”

Interfaces

NVM Express (NVMe) has become the most popular way to connect SSDs to host machines. Early SSDs would connect via HDD-era SCSI or SATA interfaces, designed in an age when spinning media set the pace and higher IO latency was a given. Flash storage throws out data as quickly as the physical bus and controller protocol can gulp it, so the world needed a new protocol to cope with its better IO performance envelope.

NVMe is designed to support flash’s rapid IO, and enables flash memory to talk directly to the CPU via the physical PCIe bus. It has been a great way to speed up communications with flash memory inside client devices and servers, which makes it a no-brainer for direct-attached storage.
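
For a feel of how that looks from the host side, here is a small, Linux-only Python sketch that lists the NVMe controllers the kernel has enumerated. It assumes the usual /sys/class/nvme sysfs layout, and will simply report nothing on a machine without NVMe devices.

    # List NVMe controllers via sysfs (Linux-specific; assumes the standard /sys/class/nvme layout).
    from pathlib import Path

    def list_nvme_controllers():
        root = Path("/sys/class/nvme")
        if not root.exists():
            print("no NVMe controllers found")
            return
        for ctrl in sorted(root.glob("nvme*")):
            model = (ctrl / "model").read_text().strip() if (ctrl / "model").exists() else "?"
            transport = (ctrl / "transport").read_text().strip() if (ctrl / "transport").exists() else "?"
            print(f"{ctrl.name}: model={model}, transport={transport}")  # 'pcie' for local drives

    if __name__ == "__main__":
        list_nvme_controllers()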

Those companies wanting to use flash for shared storage will still face bottlenecks as they turn to the perennial favourites for connecting to external arrays: iSCSI, Ethernet, Fibre Channel and, in rarefied cases, Infiniband. These were all defined during the hard drive era, though, and don’t take full advantage of SSDs’ low-latency, parallel read/write capabilities. This can leave virtualised machines twiddling their digital thumbs as they wait for storage IO.

This has spawned a new movement to connect external shared arrays to servers using NVMe. Say hello to NVMe over Fabrics (NVMeF).

NVMeF runs over Ethernet (RoCE or iWARP), Fibre Channel, or Infiniband for high-performance computing types. Of those three, Fausto Vaninetti, director and secretary of the Storage Networking Industry Association (SNIA), likes Fibre Channel. He argues that while Ethernet is everywhere, it won’t offer the performance of Fibre Channel-based NVMeF, even in its latest enterprise configurations.

“The reason to bet on Fibre Channel is that it is very popular,” he says. “Every disk array has the capability to support Fibre Channel connectivity. The Fortune 500 customers all use Fibre Channel as the main technology to access storage.”
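
In practice, attaching a host to an NVMeF target is a short exercise. The Python sketch below wraps the nvme-cli tool to discover and connect to an RDMA target; the address, port and subsystem NQN are placeholders, and the host needs the nvme-fabrics kernel module plus nvme-cli installed.

    # Sketch: connect a Linux host to an NVMe-oF target over RDMA using nvme-cli.
    # The target address and NQN below are placeholders, not a real array.
    import subprocess

    TARGET_ADDR = "192.0.2.10"                     # example address (documentation range)
    TARGET_NQN = "nqn.2018-01.org.example:array1"  # hypothetical subsystem NQN

    def connect_nvmeof_rdma():
        # Ask the target what subsystems it exports, then connect to one of them.
        subprocess.run(["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", "4420"], check=True)
        subprocess.run(["nvme", "connect", "-t", "rdma", "-a", TARGET_ADDR, "-s", "4420",
                        "-n", TARGET_NQN], check=True)

    if __name__ == "__main__":
        connect_nvmeof_rdma()

For Fibre Channel the transport flag changes and the addressing uses port and node names rather than IP addresses; once connected, the remote namespace shows up to the host as just another /dev/nvme device.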

The future of flash

What can the world expect as flash continues to take off? Vendors are already rolling out their implementations of NVMeF. As it takes hold, it could usher in a new approach to AFA architectures. Experts see several options.

Vaninetti sees basic functions like garbage collection (a big component of flash media management) moving from the SSD packaging to the controller, to be handled at the disk array level.

“That means flash media is not really packaged inside an SSD. You have the controllers talking directly to the flash media and bypassing the other elements that are normally part of the SSD,” he suggests.
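
Wherever garbage collection ends up living, the job itself doesn’t change: NAND erases in whole blocks, so reclaiming space means copying the still-valid pages out of a victim block before wiping it. A toy Python sketch, with made-up block contents, illustrates the mechanics:

    # Toy flash garbage collection: migrate live pages out of the emptiest block, then erase it.
    def garbage_collect(blocks, spare):
        """Pick the block with the fewest valid pages, relocate them, erase the block."""
        victim_idx = min(range(len(blocks)), key=lambda i: len(blocks[i]["valid"]))
        spare["valid"].extend(blocks[victim_idx]["valid"])   # copy live data to a fresh block
        blocks[victim_idx]["valid"].clear()                  # whole-block erase
        return victim_idx

    blocks = [
        {"valid": ["p0", "p3"]},              # mostly stale data: a cheap victim
        {"valid": ["p1", "p2", "p4", "p5"]},  # mostly live data: expensive to reclaim
    ]
    spare = {"valid": []}
    print("erased block", garbage_collect(blocks, spare), "- relocated:", spare["valid"])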

The Register’s Chris Mellor suggests that NVMeF setups using remote direct memory access (RDMA) could enable the server CPU to talk directly to the flash media and even manage it using this technology, cutting out controllers altogether and creating just a bunch of flash drives (JBOF) with a skeletal NVMe interface.

Coughlin says that this is a possible trend, but another is that flash memory will remain direct-attached, with servers using RDMA to access each other’s memory over the network.

At the other end of the spectrum lies the prospect of even smarter AFA boxes. Freeform Dynamics’ Betts argues that we may see next-generation AFA boxes appear that adapt to new forms of solid state storage such as 3D XPoint as they are added, tiering the different kinds of storage appropriately.
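
What that tiering might look like in policy terms is simple enough to sketch in Python; the threshold and tier names below are invented for illustration, not drawn from any shipping array.

    # Toy tiering policy: promote hot volumes to the faster medium, leave cold ones on NAND.
    HOT_THRESHOLD = 100   # accesses per interval before a volume is promoted (illustrative)

    def place_volumes(access_counts):
        """Return a tier assignment for each volume based on recent access counts."""
        return {vol: ("xpoint-tier" if hits >= HOT_THRESHOLD else "nand-tier")
                for vol, hits in access_counts.items()}

    print(place_volumes({"oltp-db": 950, "mail-archive": 12, "vdi-gold-image": 300}))
    # {'oltp-db': 'xpoint-tier', 'mail-archive': 'nand-tier', 'vdi-gold-image': 'xpoint-tier'}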

Expect to see various configurations of these over time, serving individual use cases. The bottom line: if you work in IT, then over the next few years there’s going to be an awful lot more flash in your life, whatever box it arrives in.
