Server shipments to fall 20% this year, but AI means vendors still raking it in
Less is more, as hyper heterogeneous computing heats up
Server unit shipments for 2023 could fall by as much as 20 percent compared with last year, even as revenue grows. The cause is hyper heterogeneous computing, which is driving up the silicon content of systems, according to Omdia.
Datacenters are being reshaped by the demands of AI, as The Register has reported previously. This shift has created demand for fewer but more highly configured, costlier systems, which is why revenue continues to rise.
For its latest Cloud and Data Center Market Update, Omdia forecasts a decline of 17 to 20 percent for server shipments across the whole of 2023, while revenue is set to expand by 6 to 8 percent.
The market researcher coined a term for this trend: hyper heterogeneous computing. This refers to servers configured with co-processors to optimize their performance for specific applications.
One example of this trend is servers with AI accelerators. The most popular configuration for large language model training is Nvidia's DGX server, fitted with eight H100 or A100 GPUs, Omdia says. The label also covers Amazon's AI inferencing servers, which are configured with custom-built co-processors called Inferentia 2.
Hyper heterogeneous computing can also include systems with other types of co-processor, such as Google's video transcoding servers with 20 custom-built Video Coding Units (VCUs). Facebook parent Meta has similar machines: its video processing servers feature 12 custom-built Meta Scalable Video Processors.
Omdia says this trend is pushing up the silicon content of servers, such that it expects CPUs and co-processors to account for 30 percent of datacenter spending by 2027, compared with less than 20 percent during the previous decade.
As well as media processing and AI, the company expects workloads such as databases and web services to get similar treatment in future.
(Databases arguably already have accelerators in the form of Computational Storage SSDs that can boost Key-Value performance with an on-chip processor.)
In terms of GPUs, Microsoft and Meta are outpacing other hyperscalers in their deployment, with both companies set to have received 150,000 of Nvidia's H100 accelerators by the end of this year, according to Omdia's figures. This is three times as many as Google, Amazon or Oracle.
Judging by Omdia's data, hyperscale cloud providers are soaking up supply to the extent that server makers such as Dell, Lenovo and HPE are struggling to fulfil GPU server orders for lack of allocation from Nvidia. Lead times for servers configured with H100 GPUs are running at 36 to 52 weeks.
These highly configured servers are also fueling demand for datacenter power and cooling infrastructure. In the first half of this year, revenue for rack power distribution kit was up 17 percent year-on-year, while spending on UPS kit was up 7 percent.
Liquid cooling, however, is where the big growth is expected. Based on data from OEMs, Omdia estimates spending on direct-to-chip liquid cooling will rise 80 percent this year, while hyperscaler supplier Supermicro said it expects 20 percent of the systems it ships in Q4 to be fitted with liquid cooling.
Prefabricated datacenter modules are on the rise as a way of quickly adding power capacity. Omdia reckons that some vendors have reported a doubling in demand for prefabricated modules to house extra switchgear, UPS, batteries, or power distribution kit.
Omdia expects to see overall datacenter spending grow by 10 percent a year between now and 2027, when it forecasts it will reach a total of $468.4 billion. ®
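For readers who want to sanity-check that forecast, here is a back-of-the-envelope sketch. The number of years of compounding is an assumption on our part (Omdia's "between now and 2027" could mean a 2022 or 2023 baseline); assuming four full years of 10 percent growth from 2023:

```python
# Illustrative only: back out the implied starting point of Omdia's
# forecast of $468.4bn total datacenter spending in 2027, assuming
# 10 percent annual growth compounding over four years (2023 -> 2027).

target_2027 = 468.4   # $bn, Omdia's 2027 forecast
growth = 1.10         # 10 percent a year
years = 4             # assumption: 2023 baseline

implied_2023 = target_2027 / growth ** years
print(f"Implied 2023 spending: ${implied_2023:.1f}bn")  # roughly $320bn
```

On those assumptions the forecast implies total datacenter spending of around $320 billion this year; a 2022 baseline would shave that to roughly $290 billion.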