If you’ve ever had to scope out and purchase enterprise storage, you’ll know what a nightmare it can be. Vendors love to make the process as opaque as possible, with quotes that run into multiple pages and dozens of line items – including bezels and racks.
Consider also the issue of adding more hardware (e.g. drives and controllers) to scale up existing solutions, where the new hardware needs to co-terminate with the contract/agreement covering the majority of the hardware. If you upgrade in small, frequent increments, the contracts can become a complete mess.
Last week, NetApp SolidFire used its annual Analysts’ Day to announce FlashForward, a new way to purchase SolidFire storage that splits the cost of the hardware and software into two separate chargeable items.
This capacity licensing model allows customers to purchase increments of 100TB “software packs” and buy the hardware (currently based on Dell R630 servers) at pass-through pricing.
SolidFire aren’t the first company to introduce flexible purchasing arrangements. EMC used the OpenScale model for many years, allowing customers to use a predictive pricing model. More recently, both Kaminario and Pure Storage have introduced schemes (Perpetual Array and Evergreen Storage, respectively) to take some of the headache out of the traditional three- or four-year buying cycle. So what’s different here?
First of all, we should look at how practical on-demand purchasing actually is. From a customer perspective, the idea sounds great; buy capacity on demand and add it to the storage deployed on the floor just as it’s needed. In reality, though, things are more complex:
- Adding storage to existing scale-up solutions is disruptive and introduces risk. It’s better for the vendor to pre-populate the hardware with storage and simply enable it as required with a licence key. And what happens if the customer wants to scale down? That’s tricky, if not impossible.
- Capital exposure – placing storage on the floor in advance is expensive for the vendor. Hardware (especially storage hardware) depreciates quickly and the price of capacity six months down the line is hard to predict. The vendor therefore has to build a forward pricing model to cater for this.
- Capital exposure again – if storage vendors offer a true “capacity on demand” model, then hardware has to be priced as an operational expense; however, this creates accounting issues for the vendor in recognising revenue (sometimes not allowed until the end of a deal). The solution can be to put the hardware through a leasing company – again, complex.
On-demand, “pay as you grow” type deployments never really took off for the above reasons, and also because vendors could come in and buy back the existing hardware as part of a new deal, using power/cooling/space savings to help justify the replacement programme. The old hardware was easy to re-use, because it had many bespoke components that could be deployed elsewhere as spare parts.
Today, storage is implemented differently. Almost all vendors are using commodity components based on standard Intel servers, which have little residual value after three years. Solid state disks (SSDs) have a finite write lifetime, and no-one is going to want to re-use second-hand SSDs, even if the vendor warrants them, as the failure rate would introduce extra risk. It’s also worth remembering that new SSDs have greater endurance than people expected, so keeping an array for five to seven years is no longer a big deal.
So how does that impact what SolidFire are doing? FlashForward is still a CAPEX offering (so no accounting implications), with software packs purchased as perpetual licences plus maintenance. The software pack can be applied to any SF series appliances, allowing hardware to be upgraded and amortised independently of the software. As an example, in our breakout discussions, the figures of three to four years for hardware and up to seven years for software were referenced.
As a customer you can see the benefits; buy software licences for capacity as they are required and buy/refresh hardware over time to meet physical capacity requirements. Software packs are measured on provisioned capacity – that is, the size of LUNs created and presented to the user, not on physical storage – and are independent of any data optimisation (i.e. compression/dedupe). So if dedupe rates are low, a customer just buys more hardware but doesn’t pay anything more for the software to go with the hardware. If dedupe ratios are good, then less hardware is required.
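To make the split concrete, here is a minimal sketch of how the two sides of the model scale independently. The 100TB pack size comes from the article; the provisioned capacity and data reduction ratios are purely illustrative assumptions, not SolidFire figures.

```python
import math

# Software is sold in 100TB increments (per the article);
# licences are measured on provisioned capacity, so dedupe/compression
# results have no effect on the software cost.
SOFTWARE_PACK_TB = 100

def software_packs_needed(provisioned_tb: float) -> int:
    """Packs are driven by the size of the LUNs presented to hosts."""
    return math.ceil(provisioned_tb / SOFTWARE_PACK_TB)

def physical_tb_needed(provisioned_tb: float, reduction_ratio: float) -> float:
    """Hardware, by contrast, is sized on the physical footprint
    after data reduction (assumed ratio, for illustration)."""
    return provisioned_tb / reduction_ratio

# Hypothetical example: 450TB provisioned to hosts.
packs = software_packs_needed(450)    # 5 packs, whatever the dedupe ratio
good_dedupe = physical_tb_needed(450, 4.0)   # 112.5TB physical at 4:1
poor_dedupe = physical_tb_needed(450, 1.5)   # 300.0TB physical at 1.5:1
```

The point the sketch illustrates is that a poor reduction ratio only changes the hardware line of the bill; the software spend is fixed by what the customer provisions.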
There are other benefits too: hardware can be purchased and racked ahead of time and software licences purchased only as needed, allowing data centres to be pre-seeded with hardware.
The “elephant in the room” here is why SolidFire would need to change their purchasing model in the first place. Both Pure and Kaminario’s offerings are slightly different and don’t separate the licensing from the hardware. I think we have to look wider at what’s happening outside of traditional vendors.
There’s now an influx of solutions such as Red Hat, Ceph, Hedvig, DataCore and Datera where storage software is purchased per TB and the customer brings their own hardware. These offerings want to make it easy for storage to be purchased on simple $/TB metrics, with the specifics of the hardware down to the customer. If they become popular (and it’s a big if), then traditional vendors will have serious problems competing and making the accounting work.
The Architect’s view
Providing more purchasing options is a good move for SolidFire; customers can now purchase appliances and software together, appliances and capacity licences separately, or software on its own (ElementX). This provides real flexibility for the customer, depending on their circumstances.
We should also look to what this means for the other products in NetApp’s portfolio. Will they also move to this model? Data ONTAP Select (the new name for Edge) could easily be purchased in this way and perhaps over time the traditional ONTAP platform will be too. But that’s just a guess. In the meantime we can wait and see how successful the SolidFire model is by the Analysts’ Day next year.