Is flash a cache or pretend disk drive?
Or is this the wrong question for Fusion-io?
Interview Should you use flash solid-state drive (SSD) storage as a pretend hard disk drive, or as a cache attached to a server's main bus?
Two approaches have emerged for using flash in large-scale storage applications. EMC, with the help of STEC, says to use drop-in Fibre-Channel-attached SSDs, which function like very, very fast Fibre Channel hard drives, as a small but significant tier zero of storage in large disk drive arrays. IBM, with the help of Fusion-io, thinks you should provide a PCI-e link to separate SSD storage, as demonstrated in Project Quicksilver with its 4TB of Fusion flash.
Because of its PCI-e bus connection, Fusion-io has been thought of as supplying server-accelerating flash rather than storage array flash. Not so, says Rick White, one of the three founders of Fusion-io and its chief marketing officer. In the Quicksilver project the flash is a storage array, but it is connected to an IBM System x server's bus. The server functions, in effect, as a storage array controller.
In this interview Rick White sets out Fusion-io's approach to the server-vs-storage question and to how SSDs should be connected. His replies have been edited to bring out what we think are the main points.
El Reg: What are the issues hard drive storage array vendors should consider when thinking about flash-enabling their storage arrays?
Rick White: The cost of NAND flash is the same no matter where it is deployed in the storage infrastructure. What is different is how effectively it is utilized, and the cost of connecting it in.
Making NAND flash connect up like disks do, behind buses and protocols designed for slow mechanical disks, simply wastes much of the medium’s benefit and increases the cost of connecting it in. Putting NAND flash more directly on the PCIe bus, on the other hand, reduces cost and enhances the capabilities inherent to NAND flash, regardless of whether that's the PCIe bus of a server or the PCIe bus of a storage array appliance (again, see IBM's Project Quicksilver).
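To put rough numbers on that protocol-overhead argument, here is a back-of-envelope sketch. The latency figures are our own illustrative assumptions, not Fusion-io's:

```python
# Illustrative read-latency budget for NAND flash, using assumed round
# numbers rather than vendor figures.

NAND_READ_US = 50    # raw NAND page read, microseconds (assumption)
PCIE_DMA_US = 10     # PCIe DMA plus driver overhead (assumption)
FC_STACK_US = 200    # FC HBA, switch hop and SCSI stack (assumption)

direct_pcie = NAND_READ_US + PCIE_DMA_US
behind_fc = NAND_READ_US + PCIE_DMA_US + FC_STACK_US

print(f"PCIe-attached NAND: ~{direct_pcie} us per read")
print(f"FC-attached SSD:    ~{behind_fc} us per read")
print(f"Protocol overhead:  {behind_fc / direct_pcie:.1f}x the direct latency")

# Against a 5,000 us mechanical seek the FC stack is noise; against a
# 50 us NAND read it dominates -- which is White's point about protocols
# designed for slow mechanical disks.
```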
El Reg: How would you compare and contrast SSD-accelerating servers and storage arrays?
Rick White: It's not about accelerating servers vs. accelerating storage arrays. It is about putting the NAND flash as close as possible to the bus that is common to both and through which the data must flow anyway. Today that bus is PCIe.
What people are missing is that, inside all modern storage array infrastructures, there is a PCIe bus that moves data on/off FC and to/from the DRAM caches. Placing NAND flash directly off that same bus is the best answer.
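White's shared-bus point can be made concrete with some standards-based bandwidth arithmetic, using circa-2008 figures:

```python
# Bandwidth of the PCIe bus inside an array controller versus the FC
# ports hanging off it, using circa-2008 standards-based figures.

PCIE2_LANE_GBPS = 0.5    # ~500 MB/s per PCIe 2.0 lane after 8b/10b encoding
FC_4G_PORT_GBPS = 0.4    # ~400 MB/s per 4Gbit Fibre Channel port

x8_slot_gbps = 8 * PCIE2_LANE_GBPS
print(f"One PCIe 2.0 x8 slot: ~{x8_slot_gbps:.1f} GB/s per direction")
print(f"One 4Gbit FC port:    ~{FC_4G_PORT_GBPS:.1f} GB/s")
print(f"FC ports needed to match one slot: {x8_slot_gbps / FC_4G_PORT_GBPS:.0f}")
```

In other words, the traffic of ten 4Gbit FC ports already funnels through a single x8 slot, so flash sitting directly on that slot skips the narrower hop entirely.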
Indeed, the difference between server acceleration and storage array acceleration really goes away once one realizes that storage array infrastructure is itself made up of servers turned into appliances. These appliances use the same commodity, off-the-shelf Intel/AMD processors, DRAM, PCIe, FC HBAs and so on (that is also true of EMC and NetApp appliances). IBM drove this point home with Project Quicksilver, actually pointing out that they used standard System x servers as their SVC appliances.
It's even more startling to note that, with the performance and capacity density offered by NAND, the difference between a server as a consumer of storage and a server as a supplier of storage simply becomes a question of the software used to export that storage from the box, and of how much storage is in the box. NAND delivers enough performance density in a standard server to rival specialized storage appliances, whereas those appliances have to beef up their CPUs, memory and PCIe buses to get the throughput.
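As a rough illustration of that performance-density claim, compare the random-read IOPS of a shelf-load of fast FC disks with a single PCIe flash card. The figures below are our own ballpark assumptions, not numbers from the interview:

```python
# Ballpark IOPS comparison with assumed, illustrative figures
# (neither vendor-published nor taken from the interview).

FC_DISK_IOPS = 300          # a fast 15K RPM FC drive (assumption)
PCIE_CARD_IOPS = 100_000    # one PCIe NAND flash card (assumption)

disks_per_shelf = 15
shelves = 4

array_iops = FC_DISK_IOPS * disks_per_shelf * shelves
cards_needed = -(-array_iops // PCIE_CARD_IOPS)   # ceiling division

print(f"{shelves} shelves of FC disks: ~{array_iops:,} random IOPS")
print(f"Matched by {cards_needed} PCIe flash card(s) in a standard server")
```

On those assumptions a single card out-runs four shelves of spindles, which is why the server/appliance distinction collapses into a question of software.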