Notes of caution
He is more sceptical though when it comes to the proprietary and enclosed nature of today's hyper-converged packages. “A hyper-converged box is great for primary storage, assuming you have the right features built in – add a node and it expands your storage pool, but that's still primary storage,” he says. “You don't store backups on primary storage, so who is providing that additional secondary storage and how does it plug into that converged framework?
“Right now I haven’t seen anyone come to market with a hyper-converged platform and say 'Here's a storage node that just plugs in and provides better backup.' It's more typical that you’ll have ExaGrid or Data Domain, say, and that's where you have to break out of the hyper-converged box and do things differently.”
Seth Knox, product marketing veep for storage specialist Atlantis Computing, sounds another note of caution. Sure, he says, hyper-converged systems run on industry-standard x86 hardware and automate the provisioning of storage to VMs as they are spun up. But they only scratch the surface of what is possible with either storage virtualisation or SDS as a whole.
“Hyper-converged is when you're using intelligent software to present enterprise-class storage from local disk,” he explains. “That dramatically affects cost, because the biggest cost element of something like EMC's Vblock is the enterprise storage. But I see SDS as a superset of hyper-converged – hyper-converged is just one of the possible storage architectures within SDS. They overlap, but SDS does more. With SDS you could pool and share SAN, NAS, local disk and even local RAM. Hyper-converged limits you to what's built into your x86 platform.”
Beyond converged: new ways of thinking
The ability to pool everything is the story that storage virtualisation pioneers such as DataCore and FalconStor have been telling for more than a decade. Does this mean that the mainstream has finally caught up with them, or is there more to it than that? A bit of both, it seems – as well as the storage virtualisation layer, another key element in the hyper-converged story is the unified management and automation layer that sits atop both storage and compute.
Simplivity solutions architect Stuart Gilks argues that this layer is what takes hyper-convergence beyond converged or integrated infrastructure packages such as Vblock or NetApp's FlexPod. “Converged system initiatives have paved the way, but a genuine single shared resource pool is the differentiator. It's to do with efficiency – you get a lot less complexity and lower operating cost,” he says, adding that a single pool can support both local and globally-aware data de-duplication, for instance. “In a converged system you had a single package to buy, but it was still discrete components. With hyper-convergence, it's that single pane of glass, that single global point of management.”
“Hyper-convergence and bringing compute and storage together is only step one in transforming the data centre,” adds Ursi. “Step two is making it web-scale. It's making sure everything runs in software and in a hypervisor-agnostic way. IDC studies on hyper-convergence predict that within two years, over 50 percent of enterprises across all sectors will use some form of hyper-converged infrastructure to run their VMs.”
“Hyper-converged is a category that integrates a lot of data centre infrastructure, it's like an ecosystem,” says Knox. “It's worth it for the single point of management alone. That has a lot of value, independent of the automation element, but automation also requires some new ways of thinking about IT.”
That last point is extremely important, because hyper-convergence is not simply a technology issue. Just like its not-so-remote cousin, cloud computing, it will also drive a rethink of how IT is organised and how applications and services are delivered. In particular, by abstracting, automating and converging the storage, it shifts the object of control closer to the business: it is the VM, not the storage, that maps to the business.
“Hyper-convergence is driving our customers to examine what they're doing now and how they're doing it,” agrees Dave Leyland, who heads the next generation data centre business unit at IT service provider Dimension Data. “As an industry, we have created some really big and horribly complex IT infrastructures, and at scale, most data centres are not wildly efficient.”
He says that like the cloud, hyper-convergence introduces new and different consumption models for services, joking that “historically we have had to buy IT like children's clothing – you buy it two sizes too big, and probably you buy two of everything so there's a spare!”
“It is a huge opportunity to drive efficiency and agility,” says Gilks. “The move from best-in-breed infrastructures is still in a fairly early stage, but it requires a rethink of your IT infrastructure. For example, if your IT department has a traditional team structure, you need to think differently for hyper-converged. It's perhaps a more natural adoption for mid-size companies, where IT is probably more generalist anyway.”
He continues, “One thing I hear from customers is how hyper-converged has the potential to simplify their planning. They typically have different refresh cycles in each area, and hyper-converged allows them to simplify budgeting, reduce the number of building blocks, and expand incrementally. Now migration can be done non-disruptively at the VM level.”
So is hyper-convergence the reinvention of the minicomputer, as some have suggested, or is it the ultimate private cloud? In many ways it is both. It is the latest swing from distributed to consolidated computing, but it is consolidated in new ways, abstracting and insulating the hardware – compute, storage and network – from the applications and services. And done right, it has the potential to change the relationship between business and IT just as PCs did in the 80s and 90s. This time, though, IT can and should be ready for it. ®