Yes, you heard me – the storage infrastructure WARS are over

Players are still fighting for dominance, though

Sysadmin blog Though there are still a great many players on the field fighting savagely for the right to dominate the industry for the next decade, I believe the storage infrastructure wars are already largely over. With so many startups still entering the storage space, and so much money flowing around, it might seem mad to call it this early, but let me lay out my reasoning.

When I say we've reached the end of the storage infrastructure wars, I should define very carefully what I mean. To my mind, the storage infrastructure wars have been about one question: "Where will we store our data?" The "how will we manage our data?" portion of the equation is still very much up in the air.

"Where will we store our data" breaks down into a few different basic categories. The first is "on an array", the second is "on the server using server SANs or equivalent", the third is "object storage clusters" and the fourth is "the cloud". Also, when I am talking about "our data", I am specifically talking about "the data which makes up live enterprise workloads".

Bulk data

Object storage has already completely trounced every other player for the storage of bulk "cold" data, like Facebook's creepy undeletable picture collection or the unlimited number of old files enterprises have to keep functionally forever thanks to various regulations. I expect some of the latter to move into "the cloud" – at least for American businesses – but that is really just outsourcing your object storage to someone else.

Arrays and server SANs were never really competing for that market. Once object storage reached Caringo-class ease of use, everything else simply became far too expensive for the job. I suspect tape will stick around for a while yet, because many enterprises have a huge investment in it and don't mind access times dragging into minutes for certain classes of data. I do, however, expect even that to go away fairly shortly.
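To make the "where" concrete: the object storage model is just data addressed by bucket and key over HTTP, rather than by LUN or file path. Here's a minimal sketch using boto3 against an S3-compatible endpoint; the endpoint, bucket and key names are my own illustrative assumptions, not anyone's production setup.

    # Archive a cold file to an S3-compatible object store and read it
    # back by the same bucket/key later, from any client, anywhere.
    # Credentials are assumed to come from the environment.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstore.example.com",  # hypothetical endpoint
    )

    # Write a cold file once...
    with open("audit-log-2013.tar.gz", "rb") as f:
        s3.put_object(Bucket="compliance-archive",
                      Key="2013/audit-log.tar.gz", Body=f)

    # ...and read it back years later by the same key.
    obj = s3.get_object(Bucket="compliance-archive",
                        Key="2013/audit-log.tar.gz")
    data = obj["Body"].read()

That flat addressing is why it scales so cheaply, and why nobody misses the array for this class of data.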

The truth of the matter is that enterprise workloads are moving from disk to flash, and from SAS to PCIe. They're getting faster and lower in latency, and that is raising expectations of what's possible. We wouldn't tolerate a system that took half an hour to boot in today's world, and we're getting increasingly dissatisfied with the time it takes to pull data out of archival storage.

Disk vendors need to sell disks. They aren't about to be driven out of business by flash flingers, especially since there isn't enough fab capacity to meet the demand of the entire world's data anyway. (And there probably never will be, truth be told.) So disk flingers are moving towards displacing tape, providing lower-latency access for cold storage… and that is occurring mainly through object storage setups.

Enterprise workloads

This brings us to enterprise workloads. All anyone cares about in the storage world is enterprise workloads: this is what you can charge big money for, earn big margins on and build entire empires upon. The object storage types have repeatedly tried to commoditise storage for enterprise workloads, but to call the results lacklustre thus far would stretch the concept of understatement to its breaking point.

Object storage clusters can't deliver the raw speed or the low latency that modern enterprise workloads demand… not unless the workload happens to be running on the same node as the storage. Enter server SANs.

Server SANs drive down the cost of providing enterprise storage functionality, allowing for commodity servers with local storage to be used. Just like arrays can't touch object storage for cold bulk storage on price, they have a very hard time matching server SANs on price for enterprise workloads.
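The core trick is simple enough to sketch. What follows is a toy illustration in Python of the placement decision a server SAN makes: pool each node's local capacity and put synchronous replicas on distinct hosts. The node names, capacities and replication factor are all invented for the example; no vendor's actual design is implied.

    # Toy server SAN placement: aggregate local capacity across
    # commodity nodes and place each VM disk's replicas on distinct
    # hosts, so the loss of one server never loses data.
    import random

    nodes = {"esx01": 4000, "esx02": 4000, "esx03": 4000}  # free GB per node

    def place_replicas(disk_gb, copies=2):
        candidates = [n for n, free in nodes.items() if free >= disk_gb]
        if len(candidates) < copies:
            raise RuntimeError("not enough nodes with free capacity")
        chosen = random.sample(candidates, copies)
        for node in chosen:
            nodes[node] -= disk_gb
        return chosen

    print(place_replicas(200))  # e.g. ['esx03', 'esx01']

Everything above runs on servers you already own, which is precisely why the array has a price problem.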

Now, the price disparity between arrays and server SANs is not nearly as great as that between arrays and bulk object storage clusters. It's close enough that clever marketing, "preferred vendor lists", lunches with CIOs and even a little unsubtle fearmongering could probably keep a good server SAN company down. Price alone isn't enough to make server SANs win.

What is making server SANs win is that they bypass the need for the storage team. Let me be absolutely crystal clear about what I mean by this.

There are only about 17,000 enterprises in the world today. Over the past year I have personally talked with the virtualisation teams, the storage teams or both at over four hundred of them. In every single one, the virtualisation teams are trialling – or have already deployed to production – server SANs.

In virtually every instance the reasoning was essentially the "DevOps" argument: the virtualisation guys don't have time to wait around for other teams to get their act together, so they need control over all aspects of the infrastructure under one command structure.

It isn't price that's driving server SAN adoption, it's ease of use, and that's the single most dangerous thing that could possibly happen to array vendors. Ease of use has never been the strength of most arrays, and once you've experienced it, it's very hard to go back. Virtualisation teams have moved into a whole new storage ease-of-use bracket.

Worse, this ease of use is about more than the storage UI. It's about managing storage from the same UI as your VMs. It's about being able to handle VM storage using profiles, APIs, automation, orchestration… and all without having to ask permission or play company politics with another team for budget.
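To show what that one-UI, one-API workflow feels like, here's a hypothetical sketch in Python. It is deliberately not any real vendor's SDK; the StoragePolicy and Hypervisor names, fields and defaults are stand-ins I've invented to illustrate policy-driven provisioning, where the VM and its storage guarantees are created in a single call.

    # Hypothetical, vendor-neutral sketch: the storage "request" is a
    # policy attached to the VM at creation time, not a LUN ticket
    # filed with another team.
    from dataclasses import dataclass

    @dataclass
    class StoragePolicy:
        name: str
        failures_to_tolerate: int   # how many host losses to survive
        flash_read_cache_pct: int   # percent of reads served from flash

    class Hypervisor:
        def __init__(self):
            self.vms = {}

        def create_vm(self, name, disk_gb, policy):
            # A real server SAN would turn the policy into replica
            # placement and caching behaviour automatically.
            self.vms[name] = {"disk_gb": disk_gb, "policy": policy.name}
            return self.vms[name]

    gold = StoragePolicy("gold", failures_to_tolerate=2, flash_read_cache_pct=30)
    hv = Hypervisor()
    print(hv.create_vm("sql01", disk_gb=500, policy=gold))

One call, one team, no politics. That, far more than price, is what the storage teams are up against.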

Thus it is that storage teams tell me they are beginning the long process of merging with virtualisation teams. Server SANs have been fully baked for barely a year, and already they are triggering changes in org charts at some of the largest companies in the world.

Virtualisation teams tell me about "VSAN first" policies, under which arrays are now considered the storage of last resort. That is a shift in approach by those who buy servers not just by the rack, but by the row, and shifts like that are what define our industry.

Beyond infrastructure

Array vendors are buying server SAN companies. They are buying up all-flash and hybrid flash companies that have easy-to-use setups, and they are working to be more than just shifters of storage boxen. They can all see the writing on the wall.

Arrays will be around – like mainframe zombies – for decades. Companies have collectively invested billions of dollars in them. They are not going to evaporate overnight. But they will become increasingly niche, just as non-virtualised workloads have become increasingly niche.

The next generation of network designs will be built to consider east-west storage traffic by default, with north-south traffic becoming the exception instead of the rule. Software-defined networking will make this easier, and the rise of server SANs will in turn drive demand for software-defined networking.

That is a look at the storage infrastructure wars as I see them… but there's still plenty of fight left in the storage industry. Those fights are occurring in the "how do we manage our data" areas: global namespace provisioning, data awareness, tiering, migration and management.

Object storage gives us the ability to retain petabytes of unstructured data, and server SANs allow VMs to easily sprawl out of control. The next frontier is not about the hardware we store all of this on, but the software we use to make sense of it all, and to move data to the various tiers of storage that make financial sense for them to occupy.
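For a feel of what that software has to do, here's a toy sketch of the basic tiering decision: map how recently a blob was touched onto the cheapest tier that still suits its access pattern. The tier names, prices and age thresholds below are invented purely for illustration.

    # Toy tiering policy: walk the tiers hottest (priciest) first and
    # pick the first one whose freshness window covers the data's age,
    # i.e. the cheapest tier that still fits how the data is accessed.
    TIERS = [
        # (name, $/GB/month, max days since last access)
        ("flash",   0.20,   7),
        ("disk",    0.04,  90),
        ("object",  0.01, 365),
        ("archive", 0.004, None),   # everything colder than a year
    ]

    def pick_tier(age_days):
        for name, cost, max_age in TIERS:
            if max_age is None or age_days <= max_age:
                return name, cost

    for age in (2, 45, 200, 900):
        print(age, "days old ->", pick_tier(age))

Multiply that decision by petabytes, add migration without downtime, and you have the next war.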

But we're in the early days of that battle. The startups are still proving themselves, and the major players are still seeking to extract money from the infrastructure companies they just spent a fortune on. Nobody's all that eager to jump in on another round just yet, especially since we've yet to see which of these startups have the best algorithms and which can be most easily integrated with existing systems. ®
