An all-flash data centre? It’s an intriguing idea. The hardware and software components may be there, but what about the business case, including the overall price/performance and total cost of ownership?
Yes, such a data centre would be free of power-gobbling, rack space-consuming spinning-disk enclosures, with consequent savings in power and cooling – but storing bits in NAND still costs more than writing them onto disk-platter recording media.
Let's look at where flash can realistically substitute for disk in data centres and where it cannot, moving from high-access-rate, latency-sensitive applications to low-access-rate ones – in other words, the spectrum from in-memory to offline data.
With data that is high value and needed very fast, flash is replacing performance disk – at the moment 15,000rpm and, increasingly, 10,000rpm drives. It is starting to appear as DIMM-connected flash, which uses the CPU memory bus and provides the lowest-latency access outside of DRAM, meaning roughly 5-10µs write latency.
Reading and writing
NAND is also appearing on PCIe-connected flash cards, with Fusion-io as the perceived market leader. Access latency is in the 75µs area – say seven times slower than flash DIMMs.
A server using disk instead of DIMM or PCIe flash would have data-access latency of 5-10 milliseconds, which is about 1,000 times longer than DIMM flash and 100 times longer than PCIe flash.
Disk-data access latency is the time needed for a disk's read/write head to move across the platter's surface to the right track and then for the right section of the track to move under the head. This waiting for mechanical events takes an age compared with solid-state data access.
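A quick back-of-envelope check of those ratios, using the approximate figures quoted above (a sketch, not a benchmark):

```python
# Rough data-access latencies from the article, in microseconds.
disk_us = (5_000, 10_000)   # spinning disk: 5-10ms of seek plus rotation
dimm_us = (5, 10)           # DIMM-connected flash: roughly 5-10us
pcie_us = 75                # PCIe flash card: roughly 75us

# Disk versus DIMM flash: about 1,000 times longer.
dimm_ratio = disk_us[1] / dimm_us[1]
print(f"disk vs DIMM flash: ~{dimm_ratio:.0f}x")

# Disk versus PCIe flash: roughly 100 times longer.
pcie_lo = disk_us[0] / pcie_us
pcie_hi = disk_us[1] / pcie_us
print(f"disk vs PCIe flash: ~{pcie_lo:.0f}x to ~{pcie_hi:.0f}x")
```

The PCIe figure works out at roughly 67 to 133 times, which is where the article's "about 100 times" comes from.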
There are a few software products that turn server-attached flash into storage memory and so help avoid data passing through the host operating system's disk-based IO subsystem, speeding up data access.
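The article names no specific products, but the idea those products implement can be illustrated with a loose analogy in stock Python: memory-mapping a file gives byte-addressable, load/store-style access instead of read()/write() calls routed through the buffered IO path. Storage-memory software applies the same principle to server-attached flash; the snippet below is an illustration of the concept, not any vendor's actual API.

```python
import mmap
import os
import tempfile

# Create a small backing file to stand in for a flash device.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    # Map the file into the process address space.
    mm = mmap.mmap(f.fileno(), 4096)
    mm[0:5] = b"hello"   # a memory store, not a write() syscall
    data = bytes(mm[0:5])
    mm.close()

print(data)
```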
Where applications, such as financial trading systems looking for arbitrage opportunities, require extremely fast compute times, avoiding latency delays is crucial and the cost of the flash is easily justified.
How about where a group of servers need a shared storage resource? Can that be constructed like a virtual SAN from the individual servers' flash storage?
None of the hyper-converged server/storage appliance vendors does this yet. You may be able to put together HP ProLiant server configurations with the P4000 StoreVirtual VSA. Similarly, you could use Atlantis USX with an all-flash server configuration and construct a cross-server flash storage pool.
But these are not everyday systems. Generally, no off-the-shelf all-flash clustered server storage systems are available – but you can build them, or have them built, possibly by an HP or Atlantis channel partner.
Life becomes easier if you need a networked all-flash array. There are at least 14 suppliers with products ranging from newly designed arrays through to older arrays in an all-flash configuration.
Startups such as Pure Storage and SolidFire have arrays designed from the ground up which are relatively light on data management services. Mainstream suppliers such as Dell, HDS, HP and NetApp have developed all-flash versions of their existing arrays which inherit the arrays’ data management services.
For example, NetApp's EF-Series is an all-flash array based on the E-Series and you can also select all-flash versions of the Data ONTAP FAS arrays which provide all of ONTAP's data management services.
It appears that although all-flash mainstream arrays are possible, most deployments of the arrays are hybrid, using both flash and disk: disk access is slower, but flash capacity costs more.