Equinix to cut costs by cranking up the heat in its datacenters

Dude, not cool


In the hopes of cutting its power bills, Equinix says it's turning the thermostat up in its datacenters.

The colocation provider expects to increase the operating temperature of its server halls to as much as a rather balmy 27C – that's about 80F in freedom units – to align with American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) standards.

These rules, which specify the acceptable range of temperature and humidity within a datacenter, actually allow for temperatures up to 32C or 89.6F, according to ASHRAE's latest A1 revision used by Equinix. For reference, depending on location, season, and equipment available, Equinix facilities today seem to range from about 19C (66F) to 25C (77F), from what we can tell.

As it stands, Equinix says cooling accounts for about 25 percent of its total energy usage globally. That's already better than most. As we've previously reported, 30 to 40 percent of a datacenter's energy consumption can be attributed to thermal management systems such as air conditioners.

"Once rolled out across our current global datacenter footprint, we anticipate energy efficiency improvements of as much as 10 percent in various locations," Raouf Abdel, EVP of global operations at Equinix, said in a statement.

In addition to higher ambient temperatures, Equinix also plans to make greater use of outside air to cool its datacenters, a technique sometimes referred to as "free cooling." The idea being that, rather than running air conditioners around the clock, datacenters – especially those in cool climates – draw on outside air instead, or at least rely on it more. All of this should, ideally, lead to lower costs for Equinix.

Actual progress toward this goal will be measured in power usage effectiveness (PUE). The industry-standard metric gauges how efficient a datacenter is by comparing the total power delivered to the facility against the amount actually consumed by compute, storage, and networking equipment. The closer the PUE is to 1.0, the more efficient the facility.

While liquid-cooled datacenters can achieve PUEs as low as 1.03, it's not uncommon for air-cooled facilities to end up closer to 1.5. This may be one of the reasons Equinix is suddenly so interested in liquid and immersion-cooling tech.
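For the curious, the arithmetic behind the metric is simple enough. The snippet below is a minimal sketch using made-up illustrative figures, not numbers reported by Equinix or any particular facility:

# Power usage effectiveness: total facility power divided by IT equipment power.
# All figures below are hypothetical, chosen only to illustrate the calculation.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Hypothetical air-cooled hall: 1,500 kW drawn at the meter, of which 1,000 kW
# reaches servers, storage, and network gear -> PUE of 1.5.
print(pue(1_500, 1_000))   # 1.5

# Hypothetical liquid-cooled hall: nearly all power goes to the IT kit itself.
print(pue(1_030, 1_000))   # 1.03

In other words, the overhead – cooling, power conversion, lighting – is whatever sits between the PUE and 1.0, which is why trimming cooling load moves the number so directly.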

Faced with growing energy prices around the world, Equinix's decision to run its datacenters a little hotter isn't all that surprising. But while lower power bills from less active cooling will certainly benefit Equinix's bottom line, convincing customers could be a bit trickier.

The bit barn is selling the idea as "enabling thousands of Equinix customers to reduce their scope 3 carbon emissions associated with their datacenter operations." Scope 3 emissions refer to greenhouse gases generated by the sale, transportation, or use of goods over their lifespan.

While Equinix's efforts may help some enterprises move the needle on their sustainability goals, the hotter ambient temperatures within these datacenters could result in thermal challenges and even higher bills for some customers.

For one, older systems may not handle the higher temperatures as gracefully as newer kit. While ASHRAE standards may allow for higher temperatures within datacenters, it's still important to double-check that customers' systems can actually handle those operating conditions, Omdia analyst Moises Levy told The Register in an interview.

Second, customers running newer systems, like those using AMD's 400W Epyc 4 CPUs, may find they need to run their system fans faster and, by extension, consume more power. On modern systems, fans account for roughly 15 percent of a server's energy consumption, and in hotter climates that figure can climb as high as 20 percent, Lenovo's Scott Tease previously told The Register.

However, Equinix won't be putting these changes into effect overnight. In fact, it appears the colocation provider will start turning up the temperature only after it's finished defining a "multi-year global roadmap" on how to do it. ®
