Good-on-paper FlashBlade: We've seen the hardware, we've touched the blades
Are there buyers for Pure's full-rack systems?
Backstory
It's not vapourware, this scale-up Ethernet cluster-in-a-box that Pure calls FlashBlade, but it's sure not hurrying to market. Announced yesterday, March 14, it's expected to be generally available by the fourth quarter, although we understand Pure's engineers could take until the end of the year to finally clear it for GA.
The hardware exists; the blades exist. We've seen the system and handled the blades. Pure says it has been working on FlashBlade for 24 months or more, and company staff are making judicious statements about what it can do and when it will be able to do it.
This is no finished product ready for explosive growth, but it's ready for a wider public gaze and beta testing now that alpha testing is over.
Thus, at GA, only one FlashBlade enclosure (box) will be supported, with two-enclosure configurations in testing. A while later, at a stage Pure calls GA-plus, two enclosures clustered together will be supported. Then more will be added, the aim being to cluster ten of them – a rackful.
Pure hasn't proved to its own satisfaction that there is a current market for full-rack FlashBlade systems, though it is convinced there will be one in the future. It says that FlashBlade use-case customers, such as chip designers, face doubling and re-doubling of data needs as design iterations evolve.
They want to bring 100,000 cores to bear on a simulation run, and need a storage resource that can feed them data and absorb the calculated results fast enough.
Compute resource is available. Networking is fast enough. In this area it's the storage that's bottlenecking users, and FlashBlade is designed to blow this bottleneck away. It will be able to ingest Internet of Things data, and support its processing while ingesting more data.
There have been interesting and intriguing choices made about its design and implementation. For example, the NAND is not the Samsung 3D V-NAND TLC flash used in the existing FlashArray//m systems: there isn't enough of the stuff being made, nor are prices as affordable as Pure would like.
Some wonder if HPE has bid good prices for the V-NAND it needs for its StoreServ arrays. Pure says it is using planar MLC NAND from Toshiba and Micron instead.
There is not enough CPU power on the blades to run application software as well as the storage software. We're told that, per TB, there is about half the CPU power found in the FlashArray systems. The aim is to add storage access protocols rather than to run apps that process stored data in some way.
Although ... Pure could build processor-only blades to scale performance independently of capacity if it wanted.
Protocols and the public cloud
NFS v3 support is ready now; it was chosen because it is the most popular file access method in the target market, which we can broadly characterise as commercial HPC, with chip design and simulation a characteristic use-case. S3 protocol support will be added around GA time, with CIFS/SMB following after that, and HDFS also on the list.
Even though S3 support is coming, there is no intent to offer an AWS cloud backend for cool or older data. We're told that data can cool until it's needed, at which point it has to be hot and ready in FlashBlade. Fetching it from AWS would take time and cost money, hence no tiering to the cloud.
FlashBlade is basically a distributed object store, there being no underlying file system.
Ethernet was chosen to link blades and enclosures because it is very fast once you move away from legacy network stacks.
FlashBlade should be much faster than Isilon systems of similar capacity. Pure implied that it thought Isilon's CIFS implementation was poor, and staffers said they don't want their own CIFS functionality and its embodiment to be anything but excellent.
Will FlashBlade be faster than Scality software running on all-flash HPE server nodes? Pure didn't know.
As a commercial HPC system, will Pure meet DataDirect Networks, which is also pushing its HPC-based technology into enterprises? We're told by Pure peeps that it expects to meet competitors in proportion to their presence among customers in the target market, meaning suppliers like NetApp will be encountered rather than, it was implied, smaller players like DDN.
Over time, as flash costs fall, FlashBlade systems could become backup targets. They could also be used to ingest data and support streaming access, then pumping out data to FlashArray systems for them to support random access needs. Pure could see the two system types exchanging data back and forth in the future.
FlashBlade is a work in progress, with its developing technology iterations being validated by alpha-testing customers, such as Mercedes and a chip designer or two. Pure is pioneering here, convinced there is a market developing in front of our eyes, and getting customers on board to validate and help prove this supposition. It's more concerned with getting things right than with rushing product to market. With no competitors breathing down its neck yet, it can take this tack, building a strong and properly engineered product.
By the time it is in full GA, possibly early 2017, competitors will still be 24-36 months behind, with those using commodity hardware – bare x86 servers and SSDs with no hardware specialisation – at a performance disadvantage.
EMC's DSSD is a scale-out, rack-scale flash system for extremely latency-sensitive applications. FlashBlade is, in this incarnation, a scale-up object storage/NFS filer for less latency-sensitive apps – ones that nonetheless need far more performance (IOPS, bandwidth and rackspace efficiency) than a disk-based system, a hybrid SSD/disk system, or even a full all-SSD box can deliver. As it develops, it will add progressively more scale-out capability and extend its file-support/access universe to include CIFS/SMB, S3 and HDFS, making it a more rounded system that can attack more use cases.
This is not a server-centric storage system. Only storage technology companies with both flash hardware and software expertise could develop competing technology, for now, and that means, we think, NetApp/SolidFire, Dell/EMC and, if it scrapes through its current tribulations and gets the financing it needs, Violin Memory. It's hard to think of others, although both HDS and HPE could surprise us if they bit the bullet and decided they need more than one basic flash hardware storage system design.
A wildcard might be a Qumulo-Kaminario combination. What interesting times we do live in – it's a pure delight for a storage fan. ®