Here’s a key benefit of that shiny new hyperconverged box you just bought: it’s supposed to speak the cloud’s language.
After all, hyperconverged storage is sometimes viewed as a private cloud in a box, melding storage, networking and compute into a single package with the storage management happening under the hood.
It offers the ability to provision new resources and control them via software APIs.
That sounds a lot like the public cloud, then, only in a rack somewhere in your data centre. In theory at least, that opens up the possibility for an on-premise hyperconverged box to talk to public cloud services like Azure and AWS. But however push-button hyperconverged kit is supposed to be, rolling it into a hybrid cloud environment is going to take a little effort. There will be speed bumps along the way.
What pressures will you face, and what skills should your team have to safely navigate them?
Jeff Kato, senior analyst and consultant at tech advisory firm Taneja Group, has never seen on-premise vendors scurry so much to make their kit easy to use. “They know they’re competing against public clouds, so now they have to make on-premise infrastructure as inexpensive and easy to use as public cloud,” he said.
The next step is for the two to integrate, he said, and indeed, we’re already seeing many signs of this across the industry.
Just bursting for a bit of the cloud
Not all hybrid cloud models are created equal, though. When it comes to support for a hyperconverged infrastructure, cloud bursting is perhaps the most irksome, no matter what the vendors might tell you. It involves offloading compute and storage from on-premise kit to public cloud infrastructure when your local infrastructure feels the strain, but it’s a sticky task, warns Tony Lock, distinguished analyst at Freeform Dynamics.
“Bursting sounds like a great idea, but it’s not that easy to do,” he says, arguing that it’s easy to overlook the management of the data and the processes behind it. “It’s about security, charging and budget control. It’s the whole typical thing of getting IT to run practically. You need to manage all of those things together, and so that comes down to orchestration.”
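Stripped to its bones, a bursting decision is a placement check: run the job locally while capacity allows, spill it to the cloud otherwise. The sketch below is deliberately naive and entirely hypothetical (the `Workload` shape, thresholds and placement names are invented for illustration); as Lock points out, a real orchestrator must also weigh data movement, security, charging and budget control, not just spare cores.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cpu_cores: int   # cores the workload needs
    data_gb: int     # data that would have to move to the cloud with it

def place(workload: Workload, local_free_cores: int,
          max_burst_data_gb: int = 500) -> str:
    """Decide where to run a workload: a deliberately naive sketch.

    Real orchestration also has to answer who pays for the cloud hours,
    whether the data is allowed off-premise, and how it gets back.
    """
    if workload.cpu_cores <= local_free_cores:
        return "on-premise"          # capacity available locally
    if workload.data_gb > max_burst_data_gb:
        return "queue-locally"       # too much data to ship economically
    return "burst-to-cloud"          # spill to the public cloud

# Example: a 16-core job on a box with only 4 free cores and a small data set
print(place(Workload("analytics", cpu_cores=16, data_gb=200), local_free_cores=4))
```

Even this toy version shows why bursting is "not that easy": the interesting decisions are the ones the capacity check can't see.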
Giorgio Nebuloni, associate research director for European systems and infrastructure solutions at IDC, argues that most companies don’t have bursting on their radar today. That makes sense: hyperconvergence is still in the early adoption phase, and companies are still playing with the boxes to see what they can do.
Hyperconverged equipment is also generally purchased for discrete projects focusing on specific workloads, rather than as general-purpose computing solutions, analysts say. Just under a quarter (23 per cent) of the IT pros that 451 Research interviewed for its March 2016 hyperconvergence report said that they were using hyperconverged infrastructure products.
Building on disaster
Another hybrid use case sees companies planning their use of cloud resources more strategically, rather than using them as an ad hoc overspill for storage and compute requirements.
“This is where you have the client logic on site and the business logic in the cloud, which is one I find far more attractive,” said Clive Longbottom, founder of tech analyst firm Quocirca. This can work well in environments where demand for the front-end interface is less volatile, while the heavy lifting at the back end can fluctuate. Longbottom pegs virtual desktop infrastructure (VDI) as an example, just as long as you can get past the daily morning boot storm.
That’s all very nice, but how about starting with some training wheels? Focus on simple disaster recovery for backup purposes first, suggests Taneja’s Kato.
“For a lot of people, one of the hot items is doing DR to the public cloud. That’s a low hanging fruit,” he said, adding that relatively simple on- and off-premise collaborations like this build capabilities for more sophisticated hybrid scenarios in the future.
Just backing up data into the cloud still leaves you with the challenge of sucking gigabytes or terabytes of data back down the pipe, should your on-premise infrastructure throw a wobbly. Failover scenarios present an intermediate stepping stone into more sophisticated hybrid cloud solutions, Kato said. If companies can run storage software in the cloud, they can experiment with copying apps into the cloud too.
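The DR-first pattern Kato describes boils down to two operations: replicate snapshots off-site, then restore from the off-site copy when the primary fails. A minimal in-memory sketch of that shape, with plain dicts standing in for the local array and the cloud object store (any real implementation would go through a vendor's replication engine or an object-storage SDK instead):

```python
# Plain dicts stand in for a local array and a cloud object store.
local_snapshots: dict[str, bytes] = {}
cloud_snapshots: dict[str, bytes] = {}

def back_up(name: str, data: bytes) -> None:
    """Take a snapshot locally and replicate it to the cloud copy."""
    local_snapshots[name] = data
    cloud_snapshots[name] = data     # the off-site copy used for DR

def fail_over(name: str) -> bytes:
    """Restore from the cloud copy when the on-premise side is gone.

    This is the step that hurts in practice: at real sizes, pulling
    terabytes back down the pipe dominates the recovery time.
    """
    return cloud_snapshots[name]

back_up("vm-finance", b"disk image contents")
local_snapshots.clear()              # simulate losing the on-premise kit
print(fail_over("vm-finance"))
```

The logic is trivial; the operational weight sits entirely in the restore path, which is exactly why DR makes a gentle first hybrid project.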
Hyperconverged solutions are better for this than traditional infrastructure, Kato believes. “Because their software is based on distributed software-defined storage, it’s much easier for them to port that to the cloud,” he says.
Software-defined means software-controllable, and programmable infrastructures are a big part of the hyperconvergence story, says Justin Giardina, CTO of hosting company iLand.
“What hyperconvergence means to us is that we can program pieces of metal and blades and servers in the exact manner that we need to fit a certain use case for our infrastructure,” he says.
A programmatic future
Hyperconverged equipment promises to bring the same kinds of APIs to customers’ own datacentres, offering operations staff the chance to manipulate on-premise and off-premise assets in the same way.
“It’s really starting to blur the lines in how my sysadmin programs our infrastructure,” Giardina says. “You can have a common set of APIs that let me provision my hardware and my hyperconverged stack in the same way as the cloud, and that’s really huge.”
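The “common set of APIs” Giardina describes is essentially an adapter: one provisioning call, dispatched to whichever backend — public cloud or on-premise hyperconverged stack — is asked for. A hedged sketch of the pattern; the class and method names here are invented for illustration, and no real vendor SDK looks exactly like this:

```python
from abc import ABC, abstractmethod

class Provisioner(ABC):
    """One provisioning interface, wherever the metal lives."""
    @abstractmethod
    def create_vm(self, name: str, cpus: int, ram_gb: int) -> dict: ...

class CloudProvisioner(Provisioner):
    def create_vm(self, name, cpus, ram_gb):
        # In reality: a REST call to the public cloud's provisioning API
        return {"name": name, "cpus": cpus, "ram_gb": ram_gb,
                "location": "public-cloud"}

class HyperconvergedProvisioner(Provisioner):
    def create_vm(self, name, cpus, ram_gb):
        # In reality: a call to the on-premise appliance's management API
        return {"name": name, "cpus": cpus, "ram_gb": ram_gb,
                "location": "on-premise"}

def provision(backend: Provisioner, name: str, cpus: int, ram_gb: int) -> dict:
    # The sysadmin's code is identical regardless of where the VM lands
    return backend.create_vm(name, cpus, ram_gb)

print(provision(HyperconvergedProvisioner(), "web01", cpus=4, ram_gb=16))
```

Swap in the other backend and the calling code doesn't change; that interchangeability is the "really huge" part of Giardina's pitch.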
It’s a utopian view, but it’s going to take some work to get there. If hyperconvergence is still in the early adoption stage, then the infrastructure-as-code concept is embryonic, warned Freeform Dynamics’ Lock.
“The software-defined datacentre is one of those things that when we talk to IT pros, they say ‘It’s a lovely idea but I haven’t seen much evidence of it yet’,” he tells us.
Giardina agrees. He works with customers on this stuff a lot, and the APIs for managing cloud services and hyperconverged kit simply aren’t unified yet. We’ve gone from server sprawl to VM sprawl to API sprawl, he reckons. The APIs are REST-based, which makes them easy to write to, but a lot of the work is still customised.
“We’re already starting to see the emergence of third-party platforms that can consolidate these APIs and manage them in a single interface,” he says. The likes of Puppet, SaltStack and Chef are all helping to bring these APIs together, even if it’s a work in progress. “The idea is that with a common middleware layer I can program something in a common framework and be able to leverage all of my ecosystem,” Giardina says. “But we’re not 100 per cent there yet.”
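Tools like Puppet, Chef and SaltStack share one underlying model: you declare the state you want, and the tool converges each target towards it through whatever API that target exposes. A toy sketch of that converge loop — in-memory dicts stand in for real infrastructure, and the resource shapes are invented — shows why the model is attractive as a middleware layer: the same declaration works against any backend, and re-running it is a no-op:

```python
def converge(current: dict, desired: dict) -> list[str]:
    """Compute and 'apply' the changes that move current state to desired.

    This is the declarative model Puppet, Chef and SaltStack share:
    the caller says what should exist, not how to create it.
    """
    actions = []
    for name, spec in desired.items():
        if current.get(name) != spec:
            actions.append(f"apply {name}")
            current[name] = spec        # in reality: a vendor API call
    for name in list(current):
        if name not in desired:
            actions.append(f"remove {name}")
            del current[name]
    return actions

state = {"vm-web01": {"cpus": 2}}
wanted = {"vm-web01": {"cpus": 4}, "vm-db01": {"cpus": 8}}
print(converge(state, wanted))   # first run resizes web01 and creates db01
print(converge(state, wanted))   # second run makes no changes: idempotent
```

The hard part, which the sketch hides inside a single comment, is the per-vendor glue behind each "apply" — which is precisely the API sprawl Giardina says hasn't been consolidated yet.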