
With the cloud, letting your emotions run over may mean the costs do too

Time to keep a cool head and take things slow

Sponsored It’s understandable that people get excited about the cloud. As well as the prospect of virtually unlimited compute and storage, it promises relief from the frustrations of maintaining legacy software and infrastructure.

But frustration, excitement and relief are all emotions. And emotions may cloud your judgement about the true benefits – and costs – of moving to the cloud. Or about the fact that in many cases it might make far more sense to keep your workloads partly, if not entirely, on-premises.

Although some organizations are genuinely cloud native – in that they were born in the cloud era and have always existed in the cloud – most will have legacy infrastructure and software that cannot easily be discarded.

Refactoring an existing application for the cloud can make for an arresting conference topic, but an arduous reality. When it comes to existing enterprise applications, there is the very real possibility that applications were written in what are now niche languages, or in line with programming models that newer staffers will struggle to make sense of. That’s before you even decide which cloud you’re refactoring for.

It is better, perhaps, to view the journey from traditional three-tier architecture to the cloud as a continuum, where specific pain points and benefits become apparent at different points on the journey.

The first stepping stone towards an infrastructure that benefits from the cloud, then, can be to implement hyperconverged infrastructure (HCI) on-premises. With the right choice of platform, or more specifically, the right virtualization platform, infrastructure silos such as legacy SANs can be eliminated, along with their associated management tools and headaches.

Crucially, again assuming you’ve made the right choice of HCI platform, this can provide the foundation for an enterprise-wide private cloud infrastructure. Storage and compute are abstracted into software and consolidated into a common pool, opening up the possibility of increased automation and self-service provisioning of resources for test and dev, for example. Improving agility and scalability becomes a question of easily adding capacity, rather than embarking on a tortuous upgrade.

Maintaining multiple management and configuration tools is grossly inefficient. A single management interface frees technical staffers to do work that is more rewarding – for them and the organization – while developers can concentrate on delivering applications that both delight customers and benefit the business.

Your legacy, virtually

All of this can result in dramatic reductions in infrastructure costs, in terms of raw hardware as well as IT staff time and unplanned downtime. In IDC’s study of Nutanix cloud platform installations, for example, participants reported that deploying new storage was 82 per cent faster and required 85 per cent less staff time. Installing new physical servers took 1.2 days, the analysis found, compared with 2.5 days previously, while the time needed to deploy a VM fell by 43 per cent.

All the while, legacy apps and associated data can be preserved in their original state, atop the virtualization layer, without the challenges of refactoring and migration.

Once an organization has embraced private cloud and seen the benefits in terms of agility as well as TCO and ROI, it is inevitable that it will begin to look at the public cloud. The transition via HCI to private cloud highlights the benefits of scalability, flexibility and self-service, along with more efficient use of resources. So, on the face of it, the public cloud is going to offer all of this to the power of n. This is why cloud mandates may come from on high, with excitement over potential cost savings – but little consideration of the true challenges involved.

And these challenges are very real. What may on the surface seem like a straightforward lift and shift to the public cloud can become a months- or even years-long process of refactoring applications for a provider’s particular APIs and tool set, swapping one load of frustration and toil for another.

And when the migration to a public cloud is complete, possibly even before, there is the specter of vendor lock-in. Other providers may, in time, offer lower costs, or emerge as a better option in terms of compliance, or redundancy. But switching providers then involves another round of refactoring and migration costs.

If the move to public cloud is a partial one – whether as a waypoint on the journey to complete cloudification, or because the organization is spinning out specific workloads or responding to specific spikes in demand – there is the question of how to manage your mixed infrastructure. Does your carefully integrated on-prem setup now have the added burden of another management interface or set of tools? Critically, how do you secure your infrastructure when it is both on-prem and on the public cloud? That is, how do you secure your hybrid cloud?

In the case of Nutanix, the code base that underpins its Nutanix Clusters offering, currently available on AWS and soon to support Azure, is the same as its on-prem enterprise cloud offering. Integrating an on-prem installation with an Amazon account takes under an hour, according to Allan Waters, Solutions Marketing Manager at Nutanix.

“Why not have an infrastructure to support both?” he says. “Because you never know. Once you toss something up on AWS, can you get that back down? It's going to be pretty darn hard and expensive.”

The biggest problem with public cloud may not make itself apparent until the first bills start dropping into your mailbox.

On the face of it, public cloud costs will look appealing. Customers can scale up their infrastructure without the capital cost of actually buying compute and storage, or the supporting infrastructure. The cost of supporting a virtual machine is measured in pennies per hour.

Different cloud, same tools?

However, few organizations will be using just one VM. Moreover, resources are typically made available in fixed ratios of the various components, e.g., CPU, storage, memory. This is what Nutanix Senior Product Marketing Manager Sahil Bansal describes as T-shirt sizing – from XS through to XXL and beyond.

So, to ensure a virtual machine has the minimum it needs of a given resource, the customer potentially ends up paying for overhead elsewhere. Typically, in the public cloud this excess cannot be redirected or shared with other VMs. Tim McCallum, Director of Customer Success-Finance at Nutanix, has dubbed this portion of unused resource “microwaste”. It might look inconsequential for an individual virtual machine or process, but in aggregate the costs mount up alarmingly.
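To make the shape of that microwaste concrete, here is a minimal back-of-the-envelope sketch in Python. The instance shapes, prices, fleet size and the 50/50 split of instance cost between CPU and RAM are all hypothetical assumptions for illustration, not Nutanix or cloud provider figures; the point is simply that a small per-VM overhead multiplies across a fleet.

```python
# Illustrative sketch only: the shapes, prices and 50/50 CPU:RAM cost split below
# are hypothetical assumptions, not real cloud provider or Nutanix figures.
SHAPES = {
    "S": {"vcpu": 2, "ram_gib": 8,  "usd_per_hr": 0.10},
    "M": {"vcpu": 4, "ram_gib": 16, "usd_per_hr": 0.20},
    "L": {"vcpu": 8, "ram_gib": 32, "usd_per_hr": 0.40},
}

def smallest_fit(vcpu_needed: int, ram_needed_gib: int):
    """Return the first (cheapest) shape that meets both the CPU and RAM requirement."""
    for name, shape in SHAPES.items():  # dict preserves insertion order: S, M, L
        if shape["vcpu"] >= vcpu_needed and shape["ram_gib"] >= ram_needed_gib:
            return name, shape
    raise ValueError("workload too large for any shape")

# A workload needing 3 vCPUs but only 6 GiB of RAM is forced up to "M" by the
# fixed CPU:RAM ratio, so it pays for 16 GiB of RAM regardless.
name, shape = smallest_fit(vcpu_needed=3, ram_needed_gib=6)
unused_ram_gib = shape["ram_gib"] - 6

HOURS_PER_MONTH = 730
FLEET_SIZE = 500  # hypothetical number of similar VMs across the organization

# Attribute half the shape price to RAM (an assumption), then cost the idle share.
wasted_usd_per_vm_hr = shape["usd_per_hr"] * 0.5 * (unused_ram_gib / shape["ram_gib"])
monthly_microwaste = wasted_usd_per_vm_hr * HOURS_PER_MONTH * FLEET_SIZE

print(f"Shape chosen: {name}, idle RAM per VM: {unused_ram_gib} GiB")
print(f"Fleet-wide microwaste: ${monthly_microwaste:,.0f}/month")
```

Even with these toy numbers, a few gigabytes of stranded RAM per VM adds up to a five-figure monthly sum across the fleet.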

Add to this the fact that public cloud providers do not have a vested interest in making it easy to turn the lights out. Individual developers and testers may be powering up multiple virtual machines a day, while customer-facing web properties may be configured for the most optimistic traffic forecast. But who ensures those resources are terminated when they have served their purpose, or scaled down when the true traffic level manifests itself?

Nutanix’s solution to the first problem – via Nutanix Clusters – is to use AWS EC2 bare metal instances along with its HCI software. Just as Nutanix abstracts storage in HCI, a hybrid and multicloud strategy implemented with Nutanix Clusters effectively abstracts the cloud. The result of running Nutanix HCI on AWS EC2 bare metal instances is that the customer can pack far more workloads into that resource, greatly reducing total cost of ownership.

As Bansal puts it, “We can come in and we can install our software and then our customer is in full control of how they virtualize their resources from that hardware.”

And those workloads can run on the same software foundation as they do on-prem, managed using the same tools, because of that common codebase.
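As a rough illustration of why that kind of consolidation can change the economics, the sketch below prices the same fleet of small VMs two ways: one managed instance per VM versus packing them onto large bare-metal hosts under a hypervisor. The node capacity, prices and overcommit ratio are hypothetical placeholders, not published EC2 or Nutanix Clusters figures.

```python
import math

# Hypothetical figures for illustration only; substitute real quotes before
# drawing any conclusions about actual EC2 or Nutanix Clusters pricing.
VM_VCPU, VM_RAM_GIB = 2, 8     # a typical small workload
FLEET = 400                    # number of such VMs to place

# Option 1: one managed cloud instance per VM
PER_VM_INSTANCE_USD_HR = 0.10
option1_usd_hr = FLEET * PER_VM_INSTANCE_USD_HR

# Option 2: bare-metal hosts running a hypervisor, so VMs share each host
NODE_VCPU, NODE_RAM_GIB = 72, 512   # assumed bare-metal node capacity
NODE_USD_HR = 5.00                  # assumed hourly node price
CPU_OVERCOMMIT = 2.0                # virtualization allows vCPU oversubscription

vms_per_node = min(int(NODE_VCPU * CPU_OVERCOMMIT // VM_VCPU),
                   NODE_RAM_GIB // VM_RAM_GIB)
nodes_needed = math.ceil(FLEET / vms_per_node)
option2_usd_hr = nodes_needed * NODE_USD_HR

print(f"Per-VM instances: ${option1_usd_hr:.2f}/hr")
print(f"{nodes_needed} bare-metal nodes ({vms_per_node} VMs per node): "
      f"${option2_usd_hr:.2f}/hr")
```

The gap in this toy model comes entirely from density: once the hypervisor can oversubscribe CPU and share memory headroom, fewer hourly units are being paid for overall.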

As for the second problem, Nutanix Clusters provides a Hibernate and Resume function (currently in a tech preview), which shuts down inactive clusters while saving the associated data to – in the case of AWS installations – S3 buckets, to be reactivated later if needed. The freed-up resources can be reused by other clusters.

Using the same platform on-prem and in the cloud also makes for a far more seamless failover in disaster recovery/resilience scenarios. As Bansal says, “If it's not something that is going to happen all the time, then why bother about managing the datacenter yourself for something that's infrequent? Use the benefit of cloud elasticity for those infrequent use cases.”

Of course, the cloud is a marketplace, and it seems natural that companies want to move their workloads to whatever cloud gives them the best value, or to enable them to meet changing regulatory and compliance obligations. However, the more tightly your infrastructure is tied to a given public cloud provider, the more you will face those same issues of tricky migrations, retooling, and fractured tool chains.

And so, the fourth point on the continuum is a hybrid multicloud approach.

Again, if you’ve chosen the right underlying platform, you should be able to use common tools to manage your infrastructure, and transparently move applications and workloads to wherever makes most sense, whether the reason is financial, regulatory or resiliency-related. Nutanix Clusters is expected to support Azure soon, offering users the chance to take full advantage of hybrid multicloud, all with the same unified management stack.

“If there's a workload that runs best on AWS, move it up there, if it deserves to be on-prem keep it on-prem,” says Waters. At the same time, if Azure lowers its pricing, and “a workload is going to be cheaper over there, you can literally move that workload over in a few hours.”

There’s no doubt that struggling with legacy three-tier architecture can be challenging. But so is having to execute a “cloud first” mandate that overlooks the actual migration and management challenges involved.

By making a clear-headed analysis of the options available, and mapping a realistic path through them, it is possible to develop a strategy that delivers benefits for your infrastructure people and your software people, as well as the business as a whole. And there’s no need to get emotional about it.

Learn how you can use Nutanix Clusters to connect your private and public clouds and migrate VMs across clouds with no retooling – take a free test drive.

Sponsored by Nutanix
