Hyperconvergence 101: More than a neatly packaged box of tricks

Simplicity is good, but what else can you do?

In a world of complex technologies and unforgiving business environments, simplicity in IT is good.

Technology teams want to get the job done with as little fuss – and as little drain on management resources – as possible. Hyperconvergence promises to deliver that simplicity, but how does it differ from more traditional computing architectures, and how can you fit it into your business, if at all?

Hyperconverged systems are an evolution of the converged systems that brought a new level of simplicity to the market a few years ago. Convergence offers a single hardware platform marrying compute, switching, storage and management; hyperconvergence (literally, "convergence through the hypervisor") collapses those elements still further.

“The difference is the level of integration with the virtual machine environment and the degree to which the parts are visible,” explained Richard Fichera, VP and principal analyst serving infrastructure and operations professionals at Forrester.

Converged infrastructures combine the server, the storage and the networking into a single product, but it's effectively just an easier way to package and purchase these components. They're all still visible, and could in principle be taken out and used separately. Nor do they bind the storage tightly into the mix: it must still be managed by a skilled operator. They're suitable for data centre modernisation and mission-critical workloads, but they can be expensive.

A hyperconverged solution, by contrast, integrates these components far more closely, tying the storage directly into the system. It consists of nodes combining storage, compute and a virtual switch. A software-based storage controller pools this storage between multiple VMs and manages the distribution of the data.

“Hyperconverged systems are really a consumption model for software-defined storage,” Fichera said. “By having a software presentation layer, they take each node of the cluster and discover and federate all of the storage on the new node, adding it to the software-defined pool.”

The benefits include more financial flexibility, explains Andrew Butler, research vice president at Gartner. By doing away with the SAN and driving storage back to the compute node, hyperconvergence makes buying and scaling far more flexible.

“[Customers have] the ability to start very small (two to three nodes) and grow resource at a very granular level,” he said. “So the minimum investment in HCIS can be as low as $20-30,000, whereas a blade/SAN-based system generally requires an investment of $300,000 or more.”

Expanding a hyperconverged system is also a lot easier, said John Abbott, founder and research vice president at 451 Research. "It's a simple way of delivering compute and storage resources together without needing deep storage administrator expertise," he said.

Instead of having to dive in and manage the storage, getting your hands dirty with LUN configuration, the data store (along with the computing resource) simply gets bigger when you add a node to the cluster.
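The pooling model Fichera and Abbott describe can be sketched in a few lines of toy Python (all class and method names here are illustrative, not any vendor's API): each node contributes its local disks to a shared software-defined pool, so adding a node grows compute and storage together in a single step.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One hyperconverged appliance node: local compute plus local disks."""
    name: str
    cpu_cores: int
    storage_tb: float

@dataclass
class Cluster:
    """Toy software-defined storage controller: federates every node's
    local storage into one pool, so there is no separate SAN to manage."""
    nodes: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # Scaling out: compute and storage grow together in one step,
        # with no LUN configuration required.
        self.nodes.append(node)

    @property
    def pooled_storage_tb(self) -> float:
        return sum(n.storage_tb for n in self.nodes)

    @property
    def total_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

# Start small, with two nodes...
cluster = Cluster()
cluster.add_node(Node("node-1", cpu_cores=16, storage_tb=8.0))
cluster.add_node(Node("node-2", cpu_cores=16, storage_tb=8.0))

# ...then grow at a granular level: add one more node, and the shared
# datastore (along with the compute resource) simply gets bigger.
cluster.add_node(Node("node-3", cpu_cores=16, storage_tb=8.0))
print(cluster.pooled_storage_tb, cluster.total_cores)  # 24.0 48
```

A real storage controller also handles replication, data placement and failure, but the consumption model is this simple: capacity scales one node at a time.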

Seeing the world through hypervision

Doing away with the SAN also opens up a new audience for the technology, explains Abbott.

“Hyperconvergence is for a whole new audience that hasn’t looked into storage very much,” he says. “They hadn’t upgraded to a storage area network, because it’s too complicated, so they were just using direct-attached storage.”

This new class of equipment gives companies without sufficient storage networking and administration expertise the chance to implement virtualized infrastructures that can run cloud-native apps.

Because the hypervisor manages all of the hardware, it makes it easier to scale a virtualized infrastructure quickly without worrying about the complexity of the SAN or the latency that it introduces. Instead, the storage is close to the virtual machine.

“This allows IT buyers the ability to base their IT infrastructure on a modular and competitively priced building block that can scale out to support larger workloads,” said Eckhardt Fischer, infrastructure research analyst at IDC. “With traditional infrastructure the individual node’s resources need to be individually managed.”

All of this makes hyperconvergence a good bet for many smaller firms, suggests Abbott. Many of them will be able to drop in a hyperconverged appliance with a handful of nodes and handle a good deal of their IT requirements.

This doesn’t mean that large firms aren’t also nibbling away at hyperconvergence, though. “HCIS is also a good fit for point projects within a larger data center, where organizations may want to deploy an appliance approach for new generation workloads like VDI,” said Gartner’s Butler.

VDI has been a mainstay for hyperconverged systems since they launched, for several reasons. IOPS and network latency are big considerations here, explains Abbott.

“One of the problems with SANs is that you do get that IO bottleneck in between,” he says. The storage area network can become a choke point when trying to serve up lots of desktop sessions, especially during morning and post-lunch boot storms.

“Hyperconvergence is a godsend because you can eliminate a lot of that latency. You haven’t got the physical network,” Abbott says. “All the storage is pretty much where the compute is, so there’s no latency there.”
