
The God Box: Searching for the holy grail array

Latency-killing super spinner

It's so near we can almost smell it: the Holy Grail storage array combining server data location, solid state hardware speed, memory access speed virtualisation, and the capacity, shareability and protection capabilities of networked arrays. It's NAND, DAS, SAN and NAS combined; the God storage box – conceivable but not yet built.

We can put the Lego-like blocks together in our minds. The God box building blocks are virtualised servers; PCIe flash; flash-enhanced and capacity-centric SAN and NAS arrays and their controller software; atomic writes; flash memory arrays; and data placement software. The key missing pieces are high-speed (PCIe-class) server-array interconnects and atomic writes – direct memory-to-NAND I/O.

The evil every storage hardware vendor is fighting is latency. Applications want to read and write data instantly. The next CPU cycle is here and the app wants to use it, not wait for I/O. Servers are becoming super-charged CPU cycle factories, and data access I/O latency is like sets of traffic lights on an interstate highway: they just should not be there.

Killing latency

I/O latency comes, broadly speaking, from three places: disk seek times, network transit time, and operating system (OS) I/O subsystem overhead. The disk seek time problem has been cracked: we are transitioning to NAND flash instead of spinning disk for primary data – the hot, active data. Disk obviously remains the most effective medium for large-scale data, particularly if it is deduplicated; flash cannot touch it there.
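
To put rough numbers on that three-way split (the figures below are illustrative orders of magnitude I have assumed, not vendor measurements), a back-of-envelope budget shows why flash alone is not enough: once seek time goes, the network hop and the OS stack become the biggest lines on the bill.

    # Back-of-envelope read latency budget. All figures are assumed
    # orders of magnitude, not measurements of any particular product.

    HDD_SEEK_US = 8000     # ~8 ms average seek plus rotational delay
    FLASH_READ_US = 100    # ~100 us NAND read through a decent controller
    NETWORK_US = 200       # SAN/NAS round trip on a fast fabric
    OS_STACK_US = 50       # OS I/O subsystem overhead per request

    def read_latency_us(media_us, networked=True):
        """Total cost of one read: media, plus optional network hop, plus OS stack."""
        return media_us + (NETWORK_US if networked else 0) + OS_STACK_US

    print(f"Networked HDD array:   {read_latency_us(HDD_SEEK_US):>5} us")
    print(f"Networked flash array: {read_latency_us(FLASH_READ_US):>5} us")
    print(f"Server-side flash:     {read_latency_us(FLASH_READ_US, networked=False):>5} us")

Swap HDD for flash and the media term collapses; what is left is the network hop and the OS stack, which is where the rest of this piece goes.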

There have been four ways of doing this:

  • We are seeing SSDs slotted into hard disk drive (HDD) slots, with data placement software, such as EMC's FAST VP, automatically moving data between HDD and SSD as its 'access temperature' rises and falls (a simplified version of the idea is sketched after this list).
  • We are also seeing flash used as an array controller cache, as with NetApp's Flash Cache and EMC's FAST Cache.
  • We are seeing newly architected flash and HDD arrays which do a better job, their makers say, of using flash storage and HDD capacity together. Think NexGen Storage, Nimble Storage and Tintri.
  • We are seeing all-flash arrays which abandon disks altogether and rely on deduplication, MLC flash and flash-focused (not HDD-focused) controller software to bring per-GB cost close to that of disk drive arrays. Think Nimbus, WhipTail, Violin Memory, and startups like Pure Storage, XtremIO and SolidFire.
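
To make the first of those approaches concrete, here is a toy sketch of 'access temperature' tiering. It illustrates the general idea behind tools like FAST VP only; the thresholds, window and names are invented for this example and are not EMC's actual algorithm.

    # Toy 'access temperature' tiering: promote hot blocks to SSD, demote
    # cold ones back to HDD. All thresholds are invented for illustration.

    from collections import defaultdict

    PROMOTE_HITS = 100   # reads per window before a block earns an SSD slot
    DEMOTE_HITS = 10     # blocks cooler than this fall back to HDD

    hits = defaultdict(int)   # block id -> accesses in the current window
    tier = {}                 # block id -> "HDD" or "SSD"

    def record_access(block_id):
        hits[block_id] += 1
        tier.setdefault(block_id, "HDD")   # new blocks start on HDD

    def rebalance():
        """Run at the end of each monitoring window to shuffle the tiers."""
        for block_id in tier:
            count = hits.get(block_id, 0)
            if count >= PROMOTE_HITS and tier[block_id] == "HDD":
                tier[block_id] = "SSD"   # hot: promote
            elif count <= DEMOTE_HITS and tier[block_id] == "SSD":
                tier[block_id] = "HDD"   # cooled off: demote
        hits.clear()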

The big "but" with these four approaches is that network latency still exists – as does the I/O latency from the OS running the apps. They only go part of the way on the journey to the God Box.

Storage and servers – come together

Network latency is vanquished by putting the storage in the server or the server in the storage. Putting HDD storage in the server – the direct-attached storage (DAS) route – gets rid of network latency, but disk latency is still present. We'll reject that. Disks are just ... so yesterday, and it has to be solid state storage.

There are two approaches to server flash right now: use the flash as a cache or as storage. PCIe flash caches are two a penny: think EMC VFCache (the latest), Micron, OCZ, TMS, Virident and others. You need software to link the cache to the app and you need a networked array to feed the cache with data. This is only a halfway house again, because cache misses are expensive in latency terms.

If it's a read cache then it's a "quarterway" house, as writes are not cached. If it doesn't work with server clusters, high availability, vMotion and/or server failover then it's an "eighthway" house. Most of these issues can be fixed, but there is no way a cache can guarantee cache misses won't happen; it's the nature of caching. No matter that caches connected to back-end arrays can offer enterprise-class data protection; the name of the game is latency-killing, and caching doesn't permanently slay the many-headed latency hydra. So the flash has to be storage.
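
Some rough arithmetic (the hit and miss figures are assumptions, not measurements) shows why: even an excellent hit rate leaves the average read hostage to the miss penalty, and the worst case never improves at all.

    # Even a very good cache leaves a latency tail: the average read cost
    # is hostage to the miss penalty. Figures are assumed for illustration.

    HIT_US = 100      # PCIe flash cache hit
    MISS_US = 8250    # miss: fetch from the networked back-end array

    for hit_rate in (0.90, 0.99, 0.999):
        avg_us = hit_rate * HIT_US + (1 - hit_rate) * MISS_US
        print(f"hit rate {hit_rate:.1%}: average {avg_us:6.1f} us, worst case {MISS_US} us")

On these numbers a 99 per cent hit rate still averages around 180 microseconds, and the worst case is untouched; that is why cache-only designs can't close the deal.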

Fusion-io is the leading exponent of putting flash as storage into servers. What about putting servers in storage? DataDirect says it does that already, with filesystem applications hosted in its arrays. Okay, we'll grant the principle but not the actuality, as no one is running serious business applications in DDN arrays yet.

EMC is saying that virtualised server apps will be vMotioned to server engines in its VMAX, VNX and Isilon arrays. Okay. This means an exit of network latency and, if the arrays are flash-based with flash-aware controllers rather than bodged disk-controller software, an exit of drive array latency too.

EMC is serious and vocal about this approach so we must pay it heed. And we must note that the flash storage tier can be backed up with massive HDD array capacity and protection features. This is a very attractive potential mix of features, although only for servers in the array – I'm hinting at server supply lock-in here – and only if it becomes mainstream, and if it can get rid of the server OS I/O subsystem latency.

