
XaaS is taking over the datacenter and IDC says you asked for it

As customers grapple with pricey AI systems and nontraditional compute, HPE, Dell, and Lenovo circle in wait

Comment There's no arguing that the cloud has changed the way we think about deploying our applications and workloads. It served to normalize consumption-based pricing and gave birth to a slew of as-a-service platforms from legacy vendors desperately trying to keep up with changing customer appetites.

Over the past few years, companies like Dell, HPE, Cisco and others have gotten behind the idea of delivering their equipment, support, and even software as a service, and it appears their efforts were justified.

Whether you call it IT-as-a-service or everything-as-a-service (XaaS), this is how IDC predicts the majority of customers will prefer to buy their infrastructure going forward. According to the firm's latest FutureScape report, by 2026, 65 percent of customers will opt to pay for their IT equipment and services the same way you might lease a car today.

Given the amount of change taking place in the IT industry, it certainly makes sense. Why would you spend tens or potentially hundreds of thousands of dollars on a GPU server just to experiment with a machine learning algorithm that may or may not pay off, when you can rent time on the cloud or lease the same gear from a company like HPE, Dell, or Lenovo?

There's a lot of opportunity to be had if you can extract useful insights from the mountain of data your enterprise has been stockpiling. Those that succeed in doing so will almost certainly come out ahead of those that don't.

As a result, the fear of missing out on AI/ML will probably do more to drive customers toward XaaS-like services than anything else. With that said, the equipment required to extract useful insights from vast quantities of data isn't exactly cheap, especially compared to your run-of-the-mill server. This is no doubt why companies like HPE and Lenovo now offer high-performance compute and AI/ML-centric systems as part of their GreenLake and TruScale platforms: most enterprise customers wouldn't be able to afford the gear outright otherwise.

The age of general compute is over

How we buy IT infrastructure isn't the only thing that's changing; what we're buying is too. By IDC's estimate, the age of the general-purpose compute server will soon be on its last legs. The FutureScape report predicts that, within four years, 95 percent of enterprises will invest in application-specific hardware tuned to their workloads.

Most servers today fall into three broad categories: general, storage, and accelerated compute. Architecturally, they're all more or less the same: your workhorse nodes are essentially just a bunch of CPU cores, some memory, and a NIC. A storage server might add some hard drives or SSDs to the mix, while the vast majority of accelerated compute nodes today are GPU based. But to IDC's point, this is changing rapidly as chipmakers target more specialized use cases.

When it comes to datacenter GPUs, Nvidia isn't the only game in town anymore. AMD's CDNA compute architecture has proven quite capable, and its GPUs now power several of the most powerful supercomputers on the Top500. Meanwhile, Intel has gotten into the game with GPU and AI accelerators of its own.

However, as the accelerated computing scene has grown more crowded, it has also become more complicated. Many vendors don't have just one datacenter architecture but several, each tuned to different workloads. Nvidia has Hopper and Ada Lovelace; Intel has its Data Center GPU Flex and Max cards and Habana Gaudi and Greco accelerators; and AMD has its MI200-series GPUs, upcoming MI300 APUs, and Xilinx FPGAs. And then, of course, there's Graphcore, SambaNova, Cerebras, and a dozen other AI startups looking to make their mark on the industry.

With the rise of SmartNICs, DPUs, and IPUs in the networking space, and a whole slew of nontraditional architectures like quantum computing and annealing capturing customers' imaginations, it's safe to say the average datacenter isn't going to look nearly so homogeneous a few short years from now.

And while choice and competition are never a bad thing, the trick for customers is going to be figuring out which vendor's systems or accelerators are the best match for their workloads. And because these systems aren't cheap, choosing the wrong architecture can be a painful and expensive mistake. Once again, it's the XaaS providers that stand to benefit, as they're well positioned to provide support and guidance as customers adapt to heterogeneous compute environments.

Avoiding the walled garden

On the flip side, the trick for XaaS providers like HPE, Dell or Lenovo is avoiding the temptation to build a walled garden around their ecosystem. While some companies – Apple comes to mind – have been wildly successful in this, IDC reports that in the wake of pandemic-fueled supply chain shortages, customers want choice more than ever.

IDC characterizes this as a game of supply chain whack-a-mole, in which for every component shortage that's solved, another pops up, and by its estimate the game will continue well into 2024.

As such, the analysts expect roughly 80 percent of Global 5,000 companies to begin sourcing IT infrastructure from multiple providers as early as next year to insulate themselves from future shortages.

And it's not just servers and networking gear; it's also the software and services running on them. According to IDC, many customers are looking to migrate to platforms based on open standards to ensure interoperability of systems across multiple vendors.

Thankfully, enterprise customers aren't the first to go down this path. Many of the major cloud providers have adopted a similar approach to networking equipment. Microsoft, for instance, developed and later open sourced the SONiC network operating system to avoid being locked into a single switch vendor's ecosystem.

The good news is that, beyond hardware, most XaaS providers have largely relied on software companies like VMware and Red Hat to fill out their platforms. The exception to this rule is HPE, which has taken a cue from the likes of AWS, Google Cloud, and Azure to develop a cloud-style control plane for its GreenLake platform.

GreenLake for Private Cloud Enterprise is a prime example, enabling customers to deploy, manage, and network bare metal, virtualized, and/or containerized workloads from a single cloud-esque control panel. Still, HPE recognizes that not every customer that buys its servers wants its software stack too. So, just like Dell and Lenovo, the company also partners with several of the largest software platforms on the market today.

And at least according to IDC, it's these partnerships that customers want. The report projects that by 2024, half of Global 2,000 enterprises will base their infrastructure selections on established partner ecosystems. ®
