The glamour news in data centres for the past couple of years has been all about all-flash arrays, with converged systems providing a growing backdrop. The flash arrays delivered hot performance, while converged infrastructure (CI) systems, led by Dell EMC's CPSD (Converged Platforms & Solutions Division, formerly VCE) and the Cisco/NetApp FlexPod, provided traditional virtualised server, shared storage array and network systems in integrated racks of components. Customers took to these because they were simpler to order, deploy, operate and manage than buying the three groups of components separately.
In the background so-called hyper-converged infrastructure (HCI) systems were being developed, led by Nutanix and SimpliVity, that provided scale-out and virtualised server nodes with locally-attached storage and networking. Clusters of these nodes had a virtual SAN built from the component servers' locally-attached storage. The systems were ordered with single SKUs and made a virtue of being simpler and easier to order, deploy, operate and manage.
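The virtual SAN idea described above can be sketched in a few lines of code. This is a deliberately simplified, hypothetical model (the class and node names are illustrative, not any vendor's API): each node contributes its locally-attached storage to a cluster-wide pool, and volumes are replicated across distinct nodes so the pool survives a node failure.

```python
# Simplified sketch of an HCI virtual SAN: node-local disks pooled into one
# logical datastore, with volumes replicated across distinct nodes.
# All names and capacities are illustrative.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    local_capacity_gb: int          # locally-attached storage on this server
    used_gb: int = 0
    volumes: list = field(default_factory=list)

class VirtualSAN:
    """Pools the nodes' local disks into one logical datastore."""
    def __init__(self, nodes, replicas=2):
        self.nodes = nodes
        self.replicas = replicas    # copies kept on distinct nodes

    def total_capacity_gb(self):
        return sum(n.local_capacity_gb for n in self.nodes)

    def usable_capacity_gb(self):
        # Raw capacity shrinks by the replication factor.
        return self.total_capacity_gb() // self.replicas

    def place_volume(self, name, size_gb):
        # Put each replica on a distinct node, most free space first.
        targets = sorted(self.nodes,
                         key=lambda n: n.local_capacity_gb - n.used_gb,
                         reverse=True)[:self.replicas]
        if len(targets) < self.replicas:
            raise RuntimeError("not enough nodes for replication")
        for node in targets:
            node.used_gb += size_gb
            node.volumes.append(name)
        return [n.name for n in targets]

cluster = VirtualSAN([Node("node1", 4000), Node("node2", 4000), Node("node3", 4000)])
print(cluster.total_capacity_gb())         # 12000 GB raw across the cluster
print(cluster.usable_capacity_gb())        # 6000 GB usable with 2 replicas
print(cluster.place_volume("vm-01", 500))  # two distinct node names
```

Adding a node to the cluster grows compute and storage together, which is both the appeal of the model and, as noted later, one of its most questioned trade-offs.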
VMware's VSAN became the default HCI storage and all the main server/storage system companies entered the market: Dell, EMC, HDS, HPE and IBM. This year hyper-converged emerged as "the" hot data centre topic, and we're taking a look at why, and how things might develop, by talking to several authorities on the subject.
Reasons for moving to HCI and adoption pattern
Why are customers moving to HCI? The 451 Group's founder and Distinguished Analyst John Abbott says: "The most often-cited benefits of moving to hyper-converged infrastructure (HCI) are: fewer systems to manage; improved scaling model; improved agility; VM centricity; and improved performance. As for drawbacks, customers sometimes question the fixed scaling model (buying compute and storage together), potential architectural flaws that arise from co-mingling storage and compute in a single unit of infrastructure, vendor lock-in, and organizational resistance from IT staff."
Chad Sakac, president of Dell EMC's converged platform division (formerly VCE), points to the consumerisation of IT and IT being seen as a service: "Hyper-convergence is in the zeitgeist because of changes in how technology is perceived. The consumerization of IT and its focus on end capabilities are abstracting the service levels and service components from the underlying component systems.
"The conversation is about how can I efficiently and economically deliver the needed service level," he continues. "IT service levels used to be a component-level conversation (hard disks, stripes, cache sizes, etc…) to build solutions, but consumer-driven environments do not have an appetite for deliberating about underlying components. Just like the underlying storage and file system of a smartphone are rarely in a consumer's mind, the same is becoming true for the underlying parts of IT systems. This change in perception makes hyper-converged architectures, which consolidate servers, networking and storage into a single unit, highly attractive to service-oriented CIOs."
How and where has HCI been adopted? Abbott reckons, "Initial HCI implementations mostly went into small and midsized businesses with fewer than 1,000 employees, into remote offices of larger organizations, and into some enterprise departments. More recent evidence suggests that adoption of HCI products is growing among larger organizations.
"VMware, for instance, notes that adoption of Virtual SAN is divided roughly equally between small, medium and even large organizations with more than 5,000 employees. Pivot3, meanwhile, says half of its customer base is mid-sized organizations (1,000-5,000 employees), with around a third in large enterprises.
"SimpliVity’s customer base is split roughly equally between small and medium/large businesses. Nutanix doesn’t break down its customer base in this way, but claims the majority of deployments are in core enterprise data centres."
Abbott adds, "Among smaller and mid-sized organizations, we see a strong preference emerging for HCI products delivered in a pre-built, appliance-based form factor. The HCI market today is composed chiefly of those suppliers that run on VMware and those that support KVM. Though many HCI vendors have a longer-term plan to support multiple hypervisors, initial adoption of HCI is overwhelmingly on the VMware platform."
Tony Lock, a Distinguished Analyst at Freeform Dynamics, says, "The reasons behind 'why now' are many and varied. The biggest reason is that the technology itself, or should I say the technologies themselves and their packaging, are now reaching a level of maturity where they are becoming usable by mainstream organisations, rather than being something in which you need to invest significant time and effort to get working systems. Getting supported solutions from mainstream vendors is the big tipping point for widespread adoption, and this is just starting."
Marc Logen, Senior Cloud Consultant at Citihub Consulting, thinks CI's shortcomings have something to do with it. "Converged infrastructure solutions have led the virtualisation market for many years. However, tougher market conditions have led to the market demanding lower entry costs and simpler, more scalable solutions. Converged solution vendors have not been able to provide an answer to these requirements, and thus hyper-convergence has been born to fill this gap."
Abbott says, "Getting rid of [customers’] storage networks and premium-priced arrays seems like a natural next step. Hyper-convergence has also come to represent a means for IT organizations to move towards a ‘new style’ of IT – an alternative to both the traditional infrastructure components of separate servers and SAN-based storage, as well as other forms of integrated systems, and sometimes as a step towards private, hybrid or public cloud services."
Hyper-convergence technology advances
What are the developments in technology that have made hyper-convergence possible and popular?
"It's a mix," says Lock. "Virtualisation and storage pooling are two elements, with management scheduling sitting above them being perhaps the most important recent development."
Logen picks two. "Software and automation are the magic sauce for hyper-convergence."
Abbott thinks x86 advances take a lot of the credit. "Intel’s server-based processors are now fast enough and inexpensive enough to run both computing tasks (including server virtualisation) and storage-related CPU activities simultaneously."
"With a few exceptions – SimpliVity, for example, offloads its data de-duplication processes to a dedicated, PCIe-attached FPGA – most hyper-converged platforms run processor-intensive storage services, including data optimisation services, in the server CPU."
"This would not have been possible a decade ago, but standard x86 chips now include additional instructions that improve performance in hashing for de-duplication and encryption, and improved I/O capabilities to a point where processors can keep up with 10 Gig networks and still be able to do useful work."
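The hashing-based de-duplication Abbott describes can be sketched briefly. This is a simplified, hypothetical illustration of the technique, not any vendor's implementation: each block is fingerprinted with a cryptographic hash, and a block whose hash has been seen before is stored only once and referenced thereafter. Real systems lean on hardware-accelerated hashing to do this at line rate, which is Abbott's point about modern x86 instructions.

```python
# Illustrative content-based de-duplication: fingerprint each fixed-size
# block with SHA-256 and store only the first copy of each unique block.

import hashlib

BLOCK_SIZE = 4096

def dedupe(data: bytes):
    store = {}    # hash -> block (unique blocks actually written)
    recipe = []   # ordered list of hashes to reconstruct the stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # write the block only if unseen
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    # Rebuild the original byte stream from the recipe of hashes.
    return b"".join(store[h] for h in recipe)

# Three identical blocks plus one distinct block.
data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE
store, recipe = dedupe(data)
print(len(recipe))   # 4 logical blocks in the stream
print(len(store))    # 2 unique blocks actually stored
assert rehydrate(store, recipe) == data
```

Every write in such a scheme costs a hash computation, which is why dedicated hash instructions on commodity CPUs (or, in SimpliVity's case, a dedicated FPGA) matter for keeping storage services off the critical path.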
He adds, "Another technology enabler is the emergence of NAND flash as a cost-viable storage layer in the data centre."
Sakac agrees with the stronger x86 server point, but adds in software-defined storage and fast networks. "Just as virtualisation has abstracted compute functions from the underlying hardware, software-defined storage has begun to decouple data management functions from purpose-built storage arrays. At the same time, CPU and storage technology are getting denser, allowing more compute and storage power into standard server footprints.

"These combine to deliver software-powered capabilities that address enterprise requirements while meeting consumer expectations through the use of industry-standard components. And lastly, the rapid adoption of high-speed networks to connect all of these components together has allowed hyper-converged systems to support large-scale deployments."