Build or buy: A tale of two all-flash strategies

Not a simple decision


Guest column Ever since flash was first added to traditional storage arrays back in 2008, there has been vigorous debate on the best approach to storage architectures for all-flash systems. There are a number of opposing camps, each with their own perception of the right way forward. Some vendors chose to use existing hardware and amend it to work with all-flash devices.

This approach has been taken by 3PAR and NetApp (with AFF, All-Flash FAS). Startups have developed "built from the ground up" solutions, designed specifically around the foibles of solid state disks (SSDs), that aim to deliver consistent performance and low latency by second-guessing how the SSD controller manages the NAND chips embedded in the device. Examples include Kaminario, XtremIO and, most recently, the hybrid-to-all-flash converts Tintri and Nimble. Finally we have the fully custom solutions, like those from Violin Memory, HDS (in the VSP) and now FlashBlade from Pure Storage.

Building SSDs is hard – true or false?

For the longest time, SSD manufacturers have been telling us how difficult it is to build a solid state disk device and efficiently manage all that pesky NAND inside. As an example, I recommend going back and looking at SanDisk's presentation on Guardian Technology at Storage Field Day 5, almost two years ago in April 2014.

Guardian is the underlying IP that manages the lifetime of the NAND in SanDisk's products and has some pretty intelligent algorithms to detect and optimize the wear of NAND cells. As we should all know, NAND memory degrades slightly every time it is written to, so we want to optimize/reduce the number of physical writes as much as possible.
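To make the idea concrete, here is a minimal sketch of static wear levelling, the kind of technique controllers like Guardian use to spread writes across NAND cells. All names and the policy shown are illustrative assumptions, not details of SanDisk's actual algorithms:

```python
# Hypothetical sketch of static wear levelling. Real controllers also weigh
# hot/cold data separation, garbage collection and ECC health, none of which
# is modelled here.

class WearLeveller:
    """Map logical pages to physical NAND blocks, always directing the next
    write to the block with the fewest program/erase (P/E) cycles."""

    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # P/E cycles per physical block
        self.mapping = {}                      # logical page -> physical block

    def write(self, logical_page):
        # Pick the least-worn block; every physical write costs one P/E cycle,
        # which is exactly the wear we want to spread evenly.
        target = min(range(len(self.erase_counts)),
                     key=self.erase_counts.__getitem__)
        self.erase_counts[target] += 1
        self.mapping[logical_page] = target
        return target


wl = WearLeveller(num_blocks=4)
for page in range(8):                          # 8 writes across 4 blocks
    wl.write(page)
print(wl.erase_counts)                         # wear spreads evenly: [2, 2, 2, 2]
```

The point of the exercise: because every cell has a finite P/E budget, evening out the counts (rather than hammering the same blocks) is what determines the usable lifetime of the device.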

So far in the market we've seen only a few companies develop bespoke all-flash systems based on NAND, most notably Violin Memory (with their VIMM technology), HDS (with FMDs, flash module devices) and now Pure Storage with the NAND inside FlashBlade. So developing custom NAND flash hardware should be hard – or is it? Discussions with Pure Storage at its recent //Accelerate 2016 event suggest the process isn't as difficult as you would think.

FlashBlade implements a custom NAND design in a scale-out architecture based on multiple compute/storage blades in a 4U appliance. The persistent memory component of each blade accommodates either 8TB or 52TB of flash, with integrated NVRAM (3GB or 9GB respectively) and a custom FPGA and dual ARM processor cores (to manage the NAND).

Why has Pure taken this approach?

I was fortunate to be able to spend time with the best minds in Pure Storage – including John Colgrove, John Hayes and Brian Pawlowski – a number of product engineers, and a room full of fellow bloggers. In our discussion (which lasted a couple of hours) we dug into some of the design strategies the company has chosen in bringing FlashBlade to market. Firstly, the design of FlashBlade was done to reduce costs. Building a custom NAND device allowed Pure to eliminate other expensive components like interconnects, SAS expanders and so on. The ARM cores and FPGA keep the design programmable for the future, implying the ability to use future NAND and solid state products when they become available.

Development of FlashBlade was done in a separate team within the company – a startup within a startup – in order to continue to get the benefits of being fully focused on one platform. New people were brought in, including experts with 20 years' knowledge of signal processing to work on the error-correction algorithms.

So custom all-flash designs are not necessarily a bad thing. In fact, building a custom "SSD" is much easier than developing a hard disk drive from scratch. For companies like Pure, the profile of I/O written and read from each blade is well understood, so the algorithms managing the NAND don't have to cope with every possible workload scenario, as is the case with generic SSDs. This implies the FPGA logic can be much simpler and easier to develop.
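A toy illustration of why a known workload simplifies things: if the host software guarantees large, append-only writes, page placement collapses to a circular log, where a generic SSD must support arbitrary overwrites with full remapping. The class and its behaviour here are an assumption for illustration, not how FlashBlade actually works:

```python
# Illustrative only: a flash translation layer for a guaranteed append-only
# workload. No overwrite handling, no remap table, no garbage collection --
# the complexity a generic SSD cannot avoid.

class AppendOnlyFTL:
    """Allocate pages strictly in order; placement is just 'next free page'."""

    def __init__(self, pages):
        self.pages = pages
        self.next_free = 0

    def write(self):
        page = self.next_free
        self.next_free = (self.next_free + 1) % self.pages  # circular log
        return page


ftl = AppendOnlyFTL(pages=1024)
print([ftl.write() for _ in range(3)])  # pages allocated in order: [0, 1, 2]
```

The narrower the workload contract, the less logic needs to live in the FPGA – which is the cost and development-time argument Pure is making.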

Reinforcement and contradiction

By coincidence, Pure's //Accelerate 2016 conference was immediately followed by another Storage Field Day, with presentations from NetApp and Violin Memory.

Violin has been developing custom solutions for many years, starting with the 1010 DRAM array launched back in 2007. During its presentation at Storage Field Day 9, Violin founder Jon Bennett showed us their "museum" copy of the 1010, based entirely on DRAM, with no flash and no processors. The current technology uses VIMMs (Violin Intelligent Memory Modules), a PCIe-based memory card that combines 16 Toshiba NAND flash chips into an intelligent AIC – 64 of which make up a Violin appliance.

Previously I've been critical of the custom approach Violin has taken, mainly because other startups have managed to innovate faster than Violin appears to have done. I still believe that position holds true, as Violin continues to focus on very high performance and low latency as its core values. Other vendors are moving to cheaper, more cost-effective TLC technologies that don't appear to be on Violin's roadmap.

NetApp recently acquired SolidFire, an all-flash startup, and has also been pushing the flash message with their AFF – All-Flash FAS systems. To be fair to NetApp, the company has had some kind of flash strategy for years, initially based on flash as a caching tier (PAM and Flashcache products). More recently, the strategy has evolved to have flash as a tier of storage in the FAS series and now as all-flash devices (EF and AFF).

I asked Dave Hitz (the NetApp founder) why the strategy had changed (you can watch the video here; my question is about 8 minutes in). He candidly pointed out that NetApp's strategy was initially right when flash was expensive, but didn't evolve quickly enough when flash became more readily available and cost effective.

So NetApp provides the counterpoint to the idea that all-flash systems have to be bespoke. Its platforms (AFF, EF, SolidFire) all use commodity SSDs, but the architecture dictates the specific requirements to the customer. AFF accelerates traditional workloads; EF provides the super-fast no-frills low-latency experience; SolidFire offers a scale-out solution with QoS and tight service-provider-type management integration.

The architect's view

Having presentations from Pure, Violin and NetApp/SolidFire all in the same week helped to highlight the different architectural design decisions vendors are making. If you have the skills to do it, custom designs can apparently be done cost-effectively, although using SSDs perhaps provides a faster route to market when you're developing your first product. Of course, focusing on the technology implementations of flash doesn't address the needs of the application, and ultimately, the business.

The bigger picture is how those architectures will be used to deliver what the business needs. NetApp has acquired SolidFire to provide a broad portfolio of products to meet traditional, performance and scale-out needs. Pure Storage has brought out FlashBlade to attack the high-performance file and object market. The nuances of the implementations are nice to know, but ultimately we care about features and price.

I don't think we'll ever get to an answer on buy or build for flash. Each of the platforms discussed here have specific benefits that will resonate with potential customers in different ways. However, I would suggest that the philosophy of these vendors tells us more about their future intentions and likely success in the market.

The videos from last week are well worth watching: you can find footage from Pure //Accelerate 2016 here, and Storage Field Day 9 here.

What's your opinion – build or buy? ®

Chris Evans is an IT consultant with over 28 years of experience. He blogs over at Architecting IT, which was formerly known as The Storage Architect.

