Comment

There is a battle going on behind the scenes over the location of storage's soul: the controller hardware and software. Oracle, Dell, EMC and VMware want it to be in the server, while NetApp and HDS want it to be in the array – an array operating with servers but distinct from them.
On the surface the picture is not quite as clear-cut as this – NetApp is working with Oracle, for example – but this is my take on what is happening down in the development depths, among the strategists and engineers with multi-year product horizons.
The modern storage industry, the one shipping networked external storage arrays, has been built on two foundations. One is EMC's establishment of a market for third-party external, block-addressed storage arrays distinct from the server suppliers of the time: HP, IBM, Digital Equipment, etc.
The other was the invention and establishment of file-addressed network-attached storage (NAS or filers). NetApp is the single most effective proponent of that, although EMC grew to ship more filers than NetApp. EMC and NetApp represent the twin peaks of the external storage array.
A storage array comes in two flavours. It is either monolithic, with multiple controllers or engines and some fancy interconnect hardware to link these to the storage shelves – think Symmetrix, latterly VMAX – or modular. Modular arrays have two controllers linked – by simpler Fibre Channel or latterly SAS – to the storage shelves. NetApp's FAS arrays and EMC's CLARiiON are classic embodiments of this idea.
Applications in servers sent SCSI block requests or file access requests to these arrays, which presented themselves, logically, as a single pool of storage, separated into dedicated logical disks (LUNs) for the server apps, or sharable filestores.
This long-lived storage concept is now being discarded, and the first nail in its coffin came from Sun and the inventive Mr Andy Bechtolsheim.
Honeycomb upsets the storage hive
Bechtolsheim's idea was that co-locating servers and storage in the same overall enclosure would speed server apps dependent on lots of stored data. Thumper, a server-rich NAS device delivered as the X4500, was one result of this; Honeycomb was another.
Neither set the world on fire, but they did show the way to getting more data into servers faster. Then Oracle bought Sun in 2009 and suddenly Bechtolsheim's idea got a rocket boost from the Exadata product: server resources running Oracle software, tightly coupled to their own storage resources. It is setting the Oracle World on fire, with much encouragement from Oracle marketing, because Oracle's own bunch of modular arrays was pretty second-rate.
Oracle Exadata database machine
What Sun invented and Oracle extended is the NoSAN server. EMC has seen this idea and responded by devising its opposite, the No-Server SAN – a kind of reverse engineering in its way.
EMC brings the servers to the array
EMC is trying to have it both ways. VMAX, VNX and Isilon arrays are going to be able to run application software in server engines in the array controller complex. There is a natural fit with VMware's ESX running the whole shebang and VMs being loaded to run storage controller software and applications that benefit from low-latency access to buckets of data. These array-located app servers use the array's own internal network or fabric, VMAX's Virtual Matrix for example, instead of the normal Ethernet or Fibre Channel fabric. This isn't SAN access as we know it.
EMC also has its Project Lightning, which will have its arrays manage the loading and running of PCIe-connected flash caches in servers. That's a road along which Dell appears to be further advanced. The Round Rock company is also going to build servers with flash, but as a storage tier rather than a cache: tier zero storage that is logically part of the entire array controller-managed storage pool, with automatic data movement.
The future may be not to have a storage networking protocol at all.
Now EMC may well have this in mind as well, with FAST VP shipping data to and from the server flash, which then becomes not really a cache but tier zero too. However, Dell's vision, as I understand it, is to move the storage into the servers and attach it to the same PCIe gen 3 bus used by the flash and the servers' DRAM. Once again this means that the servers will not use traditional external storage links to access data. This again is not a SAN. What do these NoSAN ideas mean for external array vendors?