What’s next after hyperconvergence?
Hyperconverged box humming away nicely? Time to take things up a notch
So, you’ve had a crack at hyperconverged architecture. You’ve bought your cloud-in-a-box solution from Nutanix, VMware or whomever and tried it out on a pilot project – something manageable and discrete that didn’t interfere with the rest of your architecture too much. And now that you’ve dipped your toe in the water, you’d like to wade in a little further.
This cloud stuff has its attractions, and those using hyperconverged infrastructure (HCI) will have sampled them. HCI collapses compute, storage and network resources together, creating a single, pooled storage resource that can be distributed between your different workloads as necessary.
There are several upsides to this. The first is elasticity, both in storage and compute power. If you need to shift compute power to specific applications, hyperconverged infrastructure can handle it. If you need to allocate more storage to a workload for a while, then the same applies.
The second benefit is storage performance. Hyperconvergence brings storage closer to the server, making it faster than traditional storage area networks that introduce latency between server and disk. It’s like returning to traditional direct-attached storage, but without having to tie a physical disk to an individual server and waste its unused capacity.
Finally, hyperconvergence eliminates some of the complexity of the SAN by using software-defined storage (SDS) to manage the whole tangled mess for you. It black-boxes your pooled storage for the most part. You may have to roll up your sleeves occasionally for some storage configuration, but the SDS software hides the storage spaghetti from you.
Of course, some platforms are more mature than others. But, assuming your hyperconverged appliance is humming along nicely, you may decide that you’d like to expand your cloud-in-a-box beyond the box and move to an enterprise cloud architecture. How can you best do that?
Grappling with orchestration
Firstly, let’s define what enterprise cloud is and what differentiates it from traditional IT architectures. “The cloud has to be elastic. You have to be able to share resources in real time,” said Clive Longbottom, founder of tech analyst firm Quocirca.
Don’t be caught in the virtualisation trap, he warned, where you think you’re doing cloud just because you have hypervisors in play. If you’re still manually provisioning and balancing your virtual machines and storage – if you still need a systems architect to make manual decisions about how much storage each workload requires – then you haven’t implemented true enterprise cloud, he argues.
The bit that’s meant to handle this for you, and turn your virtualised architecture into an enterprise cloud one, is the orchestration layer. That’s the ‘software-defined anything’ component that handles storage, compute, and networking.
The orchestration layer is a secret sauce that comes in different flavours. Hyperconverged infrastructure vendors sell it as software on top of a bunch of commodity hardware, while OpenStack implementations base it on the open source project originally founded by Rackspace and NASA. In an ideal world, it’s supposed to handle everything automatically under the hood, shielding administrators from the manual provisioning jobs that they’d otherwise have to manage.
Orchestration: not a one-click solution
But there remains an art to setting it up, warns Longbottom. Today, a lot of hyperconverged kit is still optimised for specific workloads such as VDI and analytics, although the vendors are trying to push the hardware into more generalised workloads. According to Nutanix, 50 per cent of the workloads deployed on its kit in the last few quarters are enterprise applications such as Oracle, SQL and SAP. The more varied your workloads today, the more likely you are to have to tweak and adjust your orchestration layer to accommodate them.
“You don’t just get OpenStack, stick it onto a bunch of boxes and say ‘we’ve done cloud’,” Longbottom said. “The broader you take your enterprise cloud venture beyond a pilot HCI project, the more you’re going to have to configure it for different workloads.”
To understand why, compare your enterprise cloud deployment with a public cloud deployment. Public cloud providers capitalise on their elasticity because their scale helps them to absorb peaks in demand. Even the smallest regional public cloud service providers will have at the very least tens of customers, and probably far more. If they run one per cent more computing and storage capacity than their average demand, that’s still a vast amount of resource that they have in reserve – certainly enough to soak up demand spikes from one or two customers.
Conversely, if you run your own data centre’s enterprise cloud deployment at one per cent more capacity than your moving average demand, all it takes is an application or two to hit, say, a 10 per cent demand peak and you’ll hit or exceed your maximum capacity, warned Longbottom. Elasticity only works up to a point in on-premise enterprise cloud, after which you must find creative ways to make it stretch without snapping.
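The headroom arithmetic above can be sketched in a few lines of Python. The figures and function name are invented for illustration; the point is only that the same percentage of headroom behaves very differently at public cloud scale than in a single-tenant data centre.

```python
# Hypothetical headroom check; names and numbers are illustrative,
# not drawn from any vendor's capacity-planning tooling.

def capacity_exceeded(avg_demand: float, headroom_pct: float, spike_pct: float) -> bool:
    """Return True if a demand spike overruns provisioned capacity."""
    capacity = avg_demand * (1 + headroom_pct / 100)
    peak = avg_demand * (1 + spike_pct / 100)
    return peak > capacity

# A provider running 1% above average demand across many customers can absorb
# a 10% spike from one customer, because that spike is a sliver of total demand:
assert not capacity_exceeded(avg_demand=100_000, headroom_pct=1, spike_pct=0.1)

# A single-tenant private cloud with the same 1% headroom cannot absorb a 10%
# spike in its own aggregate demand:
assert capacity_exceeded(avg_demand=1_000, headroom_pct=1, spike_pct=10)
```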
You can throw more capacity at your enterprise cloud to allow for most peaks, wasting resources in the process, or you can take a smarter approach, Longbottom suggests.
Savvy enterprise cloud design teams will look closely at the workloads they are handling and ask some insightful questions. Are some of them cyclical? Are some of them counter-cyclical? What are their historical peaks? When multiple workloads peak at the same time and there’s a contention for resources, which should take priority? Will those priorities change along with business conditions?
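Those questions lend themselves to a simple contention analysis. The sketch below is illustrative only: the workload names, priorities and demand profiles are invented, and in practice the figures would come from monitoring history and conversations with the business.

```python
# Illustrative contention analysis: serve workloads in priority order each
# period and record which ones get squeezed out when demand exceeds capacity.

workloads = {
    # name: (priority, peak demand per period, arbitrary units)
    "payroll-batch": (1, [10, 10, 80, 80, 10, 10]),  # month-end cyclical
    "vdi-desktops":  (2, [60, 60, 60, 60, 20, 20]),  # business-hours peak
    "test-env":      (3, [30, 30, 30, 30, 30, 30]),  # steady, low priority
}
CAPACITY = 140

def contention(workloads, capacity):
    """Map each period to the workloads throttled after higher priorities are served."""
    periods = len(next(iter(workloads.values()))[1])
    throttled = {}
    for t in range(periods):
        queue = sorted((prio, name, prof[t]) for name, (prio, prof) in workloads.items())
        used, losers = 0, []
        for prio, name, need in queue:   # highest priority (lowest number) first
            if used + need <= capacity:
                used += need
            else:
                losers.append(name)
        if losers:
            throttled[t] = losers
    return throttled

# Periods 2 and 3: payroll (80) plus VDI (60) fill capacity, so test-env is throttled.
```

Swapping in counter-cyclical profiles, or reordering the priorities when business conditions change, immediately shows up in the output, which is exactly the kind of what-if the design team needs to run.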
“You still need a few people who are intelligent enough to sit down and understand all of that, talk to the business to figure out what the risk profile of the business is, and ensure that the priority list is in place and then architect that enterprise cloud to fit in with the business need,” Longbottom says. That’s why you can’t just drop a cloud box in.
Hybrid enterprise cloud architectures can also come into play here, enabling enterprise cloud deployment teams to offset their capacity where necessary. Sam Woodcock, principal solutions architect at iLand, an enterprise cloud hosting firm, sees companies mixing their on-premise private cloud with his firm’s public cloud offering in various ways. One common use case is for companies to put production applications in their enterprise cloud, but to run their development and test environments in the public cloud, he explained.
Some companies will mix production workloads across the two domains, he added: “They would maybe run certain workloads in clouds locally, and some other workloads on premise.”
One way to architect this is to host back-end data in an on-premise cloud environment while using a public cloud vendor to host the public-facing website that accesses that data, according to Richard Blanford, founder of Fordway Solutions, a managed cloud services, consulting and IT transformation firm.
“You can pattern and match roughly what your SAP is going to do each working day depending on the time of the month and the workload and promotions that your marketing department has put into it,” he said. “What you can’t do is your ecommerce site, or the new digital release of someone’s album that everyone wants to download all at once.” That’s where public cloud can be a useful asset.
If your enterprise cloud deployment features a public/private hybrid then you’ll have to consider how to manage them alongside each other. iLand built its own management interface, designed to mimic the same terminology and metrics as VMware’s, Woodcock said.
iLand’s interface at least makes it easier for cloud administrators (who are more likely to be generalists than, say, specialist storage admins) to map what’s happening between their public and private domains, even if they are viewing them in different windows. For those truly dedicated to "single pane of glass" management, the firm exposes its management services via APIs so that companies can integrate them into the same management tools that are overseeing their on-premise enterprise cloud infrastructure.
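The integration pattern amounts to an adapter layer: each cloud exposes its metrics through a common interface, and a unified view is built on top. The sketch below is hypothetical throughout; the provider classes, metric names and figures are invented, and a real integration would make authenticated calls to the hosting firm's documented management API rather than return canned numbers.

```python
# Sketch of a "single pane of glass" adapter layer. Everything here is
# illustrative: real code would map a provider's API responses onto the
# same metric keys used for the on-premise figures.

class OnPremProvider:
    name = "on-prem"
    def metrics(self):
        return {"cpu_pct": 42.0, "storage_used_tb": 8.1}

class HostedProvider:
    name = "hosted"
    def metrics(self):
        # In practice: an authenticated HTTPS call to the hosting
        # provider's management API, normalised to the shared keys.
        return {"cpu_pct": 57.5, "storage_used_tb": 3.4}

def unified_view(providers):
    """Collect per-provider metrics into one dict, keyed by provider name."""
    return {p.name: p.metrics() for p in providers}

view = unified_view([OnPremProvider(), HostedProvider()])
```

Because both providers answer in the same vocabulary, the management tooling on top never needs to know which figures came from which domain.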
What applications to migrate
Companies must decide which workloads to move to an enterprise cloud deployment in the first place. The more you migrate, the more efficient your resource usage will be, but some applications simply weren’t designed to be cloud native. There are companies that make their living from creating cloud environments to support those applications, so that they can at least benefit from shared resources.
Migrating traditional applications to a cloud environment will involve simplifying your operating environment, advised Gunnar Menzel, chief architect for cloud infra services at Capgemini Global.
“You need to standardise and consolidate, and then have a plan that needs to be business-case driven to start shifting payloads, servers and infrastructure components from the existing traditional [architecture] to the hyperconverged,” he said.
The portfolio of computing workloads that you want to move to a cloud environment will dictate whether you expand your existing HCI environment or mix it with other, more generic enterprise cloud architectures, said Longbottom. HCI vendors are trying to push their equipment as solutions for more general workloads, but today they’re still largely perceived as optimised for specific application types such as VDI.
“HCI is very good if you have a certain workload that you want to run at an optimised level. If you have ten similar workloads you can throw them all at an HCI architecture and get the benefits of cloud,” Longbottom said. “But if you’re going to put SAP on there, next to a file and print server, next to a video streamer, next to a VDI server, you cannot tune that to serve all of those workloads in the same way.”
You could bolt lots of HCI boxes together, with each one managing a different workload, Longbottom said. Alternatively, a mixture of HCI for specific workloads and more generic enterprise cloud solutions like OpenStack for other, more varied sets of workloads might be a good option for IT departments.
Cost and service models
The technical challenges can be daunting enough, but be wary of the other more business-focused requirements when expanding your HCI project into a broader enterprise cloud, Menzel warned. Make sure you have a financial model for enterprise cloud that accounts for hidden costs, he advises. For example, if you’re sweating assets with no remaining book value in your traditional IT environment and then you move their applications to a hyperconverged system, that will represent a cost.
Then there’s the service model to consider. Shifting your underlying technical architecture to be more elastic and automated creates a platform to deliver IT in a more packaged way. Users can theoretically begin accessing IT as a service, provisioning their own business services, which will in turn provision applications and then underlying infrastructure to support them further behind the scenes.
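That provisioning cascade can be modelled as a simple catalogue expansion: a user's request for a business service fans out into applications, which fan out into infrastructure. The catalogue entries below are invented purely to illustrate the shape of the idea; a real self-service platform would drive this from blueprints and approval workflows.

```python
# Toy model of the self-service cascade: business service -> applications
# -> infrastructure. All catalogue entries are hypothetical.

CATALOGUE = {"crm-service": ["crm-app", "reporting-app"]}  # service -> apps
APP_INFRA = {
    "crm-app": ["vm-large", "db-volume"],                  # app -> infra
    "reporting-app": ["vm-small"],
}

def provision(service):
    """Expand a user's service request into the infrastructure it implies."""
    infra = []
    for app in CATALOGUE[service]:
        infra.extend(APP_INFRA[app])
    return infra

# provision("crm-service") -> ["vm-large", "db-volume", "vm-small"]
```

The user only ever asks for "crm-service"; the layers beneath decide what that means in virtual machines and storage, which is the packaging Blanford describes as hard-won in practice.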
This is the ultimate promise of cloud computing, but like the underlying infrastructure orchestration it’s more difficult to do in practice, warned Blanford. Customers will end up having to configure a lot of this themselves, integrating services like VMware’s vRealize Automation to add that service-oriented layer.
“They’re all trying to get there, where they’re trying to put the high-level functionality around provisioning, automation, self-service etc, as a front-end to their virtualization and hypervisor management platform,” Blanford said. The more legacy applications you have, the harder that will be to do, he added.
Those IT teams that have experimented with HCI will have taken the first baby steps towards a broader enterprise cloud environment, and will probably have made some important and valuable mistakes along the way. The next leap, to a broader on-premise enterprise cloud infrastructure and beyond, will take some careful planning – and a design team mature enough to map it all out. ®