Cloud is here to stay, but customers are starting to question the cost

'Hyperscalers made it sound like it was all self-service, in reality it was not'

Feature Cloud-based infrastructure services date back at least as far as 2006, when AWS introduced its S3 storage platform, followed by Elastic Compute Cloud (EC2) instances. Since then, cloud has become a global industry topping $100 billion in size, but some customers have begun to question the move to these services and started to bring workloads back in house.

The cloud market has in fact been so successful that it has proved largely resilient to the global economic factors that saw some tech companies report record losses this year. The rate of growth in cloud spending has slowed, from 20 percent in Q4 2022 to 19 percent in Q1 of this year and 18 percent for Q2, according to Synergy Research, but it has continued to grow.

Many enterprises were reluctant to buy into cloud in the early days, citing concerns over security and loss of control over the infrastructure that they rely on to operate key applications and services.

Those days are past, as can be seen from further figures from Synergy, which indicate that a decade ago enterprises were spending over $80 billion per year on their own datacenters, with less than $10 billion going on cloud infrastructure.

Since then, spend on datacenters has increased by an average of just 2 percent per year, while that on cloud has grown by an average of 42 percent per year, reaching $120.3 billion in 2022.

So cloud-based infrastructure has become an accepted part of the way organizations operate their IT these days, and that isn't likely to change, but some of the shine has come off the cloud in recent years.

Some companies have found that, far from saving them money, operating applications and services in the cloud can be just as costly as owning and managing their own infrastructure for the purpose, and sometimes more so.

Earlier this year, this was demonstrated by one company which calculated that keeping its infrastructure on-premises, rather than using Amazon Web Services, would save it $400 million over three years, as The Register reported.

Another example is Basecamp project management developer 37Signals, which decided to ditch the cloud and go back to on-premises infrastructure after being presented with a $3.2 million cloud hosting bill.

Part of the issue seems to be that cloud is not easier to manage than looking after your own infrastructure, but instead presents a different set of management challenges.

For example, there is what used to be called server sprawl, where users spin up a new virtual server for every new project, then forget to shut it down when it is no longer required. Even if they are unused, those instances will still be metered by the cloud provider, clocking up unnecessary expenditure.
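To make that concrete, below is a minimal sketch of the kind of check a cloud team might run, using Python and AWS's boto3 SDK to list running EC2 instances and flag any whose average CPU utilization has been negligible over the past two weeks. The 14-day window and the 5 percent threshold are illustrative assumptions, not a recommendation from any of the companies mentioned in this article.

    # A rough, illustrative check for "forgotten" instances, written against
    # AWS's boto3 SDK. The 14-day window and 5 percent CPU threshold are
    # arbitrary assumptions for the sake of the example.
    import datetime

    import boto3

    ec2 = boto3.client("ec2")
    cloudwatch = boto3.client("cloudwatch")

    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(days=14)

    # Every instance in the "running" state is being metered, used or not.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    for reservation in reservations:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            # Fetch one average CPU datapoint per day for the last two weeks.
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=start,
                EndTime=end,
                Period=86400,
                Statistics=["Average"],
            )
            datapoints = stats["Datapoints"]
            if not datapoints:
                print(f"{instance_id}: no CPU data, but still metered while running")
                continue
            avg_cpu = sum(d["Average"] for d in datapoints) / len(datapoints)
            if avg_cpu < 5.0:
                print(f"{instance_id}: average CPU {avg_cpu:.1f}% over 14 days, candidate to stop")

Commercial cost tooling does considerably more than this, of course, folding in network, disk, and memory metrics as well as tagging and ownership data, but the principle is the same: anything left running is being billed.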

In a Forrester Consulting report commissioned by cloud software outfit HashiCorp, 94 percent of the decision makers and practitioners surveyed said their organization had experienced one or more types of avoidable cloud spend.

Over-provisioned resources and idle or underutilized resources affected half of all respondents, who blamed a lack of skills or of the right capabilities to manage those resources.

This issue was also highlighted at the end of last year by CAST AI, which has something of a vested interest as it develops a platform to monitor customer use of resources (in this case Kubernetes clusters) across the three major cloud platforms – AWS, Microsoft Azure and Google Cloud.

It claimed that organizations on average provision a third more cloud resources than they end up using, and that they blame a lack of visibility into their cloud usage as the main reason for this.

The CAST AI platform provides free analysis for organizations to determine how their cloud resources are provisioned, while paying customers have the option to let the platform take remedial action based on its findings.

Those actions can include freeing up unused resources, or moving workloads to spot instances (virtual machines that run on a cloud provider's otherwise unused capacity, at a discount), which can deliver savings of up to 60 percent, the company claimed at the time.
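As a hedged illustration of where a figure like that comes from, the sketch below pulls a day of spot price history for one instance type via boto3 and compares it with an assumed on-demand rate. The us-east-1 region, the m5.large type, and the $0.096 per hour on-demand figure are assumptions made for the example, not quoted prices, and actual savings vary by region, instance type, and time.

    # Illustrative only: compare recent spot prices for one instance type with
    # an assumed on-demand rate to estimate the discount. Region, instance type,
    # and the on-demand figure below are assumptions, not quoted prices.
    import datetime

    import boto3

    INSTANCE_TYPE = "m5.large"
    ASSUMED_ON_DEMAND_HOURLY = 0.096  # USD per hour, assumed for the example

    ec2 = boto3.client("ec2", region_name="us-east-1")

    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(days=1)

    history = ec2.describe_spot_price_history(
        InstanceTypes=[INSTANCE_TYPE],
        ProductDescriptions=["Linux/UNIX"],
        StartTime=start,
        EndTime=end,
    )["SpotPriceHistory"]

    if history:
        prices = [float(entry["SpotPrice"]) for entry in history]
        avg_spot = sum(prices) / len(prices)
        discount = (1 - avg_spot / ASSUMED_ON_DEMAND_HOURLY) * 100
        print(
            f"{INSTANCE_TYPE}: average spot ${avg_spot:.4f}/hr vs assumed "
            f"on-demand ${ASSUMED_ON_DEMAND_HOURLY}/hr, roughly {discount:.0f}% cheaper"
        )
    else:
        print("No spot price history returned for the chosen window")

The catch, and the reason spot capacity is cheap, is that the provider can reclaim those virtual machines at short notice, so only workloads that tolerate interruption are good candidates for that kind of saving.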

Cost pressures are forcing organizations to look more closely at their cloud investments, according to Everest Group. It claims cost has now risen to become the top concern among cloud customers, and that 67 percent of enterprises surveyed say they are not seeing the expected value from the cloud.

Everest's Cloud and Legacy transformation leader Abhishek Singh told us that this can be down to the way organizations are using the cloud, and that the cost benefits are clear for operating cloud-native applications.

"For legacy or complex enterprise workloads, enterprises are realizing that simply lifting and shifting applications from on-premise to public cloud is a bandaid, not a permanent fix," he said. "Containerizing applications and porting them to cloud can help you solve the infrastructure question, but leaves the modern application architecture question still hanging," he added.

The whole cost question "is probably the worst kept secret of enterprise tech," Singh claimed. "Cloud is NOT cheaper and does not rid you of redundancy, the two cases for public cloud we were all convinced about 10 years ago."

And one of the big issues is that complex cloud environments with multiple lines of spend can make observability and change management difficult, leading to billions of dollars of commitments around cloud consumption simply not materializing into a return on investment, Singh said.

"Hyperscalers made it sound like it was all self-service, in reality it was not - as can be seen in the thriving businesses that system integrators (Accenture, Deloitte, PWC, TCS, Infosys) have made out of it," he commented.

This is reminiscent of software licensing, where an entire ecosystem of software asset management providers sprang up to try to make money by demystifying the byzantine process of ensuring software compliance, while dangling promised savings in front of customers.

Meanwhile, AI is now changing the cloud infrastructure landscape, especially since generative AI and large language models (LLMs) grabbed the attention of so many enterprise decision makers recently.

A recent report from Omdia claimed that investment in infrastructure for AI model training is now the top priority among datacenter operators. This means buying in high-performance servers outfitted with costly GPUs from the likes of Nvidia, which displaces funding that would otherwise go towards refreshing existing server fleets and investing in other new projects.

"One of the big questions around GenAI is the cost of running it," said Singh. "Currently, it is Nvidia and the cloud hyperscalers who are the only ones making money."

"But once the cost equation is optimized, the other question for enterprises will be: do I keep my privileged data on cloud or bring it back and run the models in walled-garden environments? That is the future controversy that cloud players need to deal with," he said. ®
