Is a lack of standards holding immersion cooling back?
There are just so many ways to deep fry your chips these days
Comment Liquid and immersion cooling have undergone something of a renaissance in the datacenter in recent years as components have grown ever hotter.
This trend has only accelerated over the past few months, with a flurry of innovation and development around everything from liquid-cooled servers and components to vats of refrigerant, driven by vendors that believe the only way to cool these systems long term is to drench them.
Liquid and immersion cooling are by no means new technologies. They’ve had a storied history in the high-performance computing space, in systems like HPE’s Apollo, Cray, and Lenovo’s Neptune to name just a handful.
A major factor driving the adoption of this tech in traditional datacenters is a combination of more powerful chips and a general desire to cut operating costs by curbing energy consumption.
One of the challenges, however, is that many of these systems employ radically different form factors than are typical in air-cooled datacenters. Some require only modest changes to the existing rack infrastructure, while others ditch that convention entirely in favor of massive tubs into which servers are vertically slotted.
The ways these technologies are being implemented are a mixed bag, to say the least.
Immersion cooling meets rack mount
This challenge was on full display this week at HPE Discover, where the IT goliath announced a collaboration with Intel and Iceotope to bring immersion-cooling tech to HPE's enterprise-focused ProLiant server line.
The systems can now be provisioned with Iceotope's Ku:l immersion and liquid-cooling technology via HPE's channel partners, with support provided by distributor Avnet Integrated. Iceotope's designs meld elements of immersion cooling and closed-loop liquid cooling, allowing the technology to be deployed in rack environments with minimal changes to the existing infrastructure.
Iceotope's chassis-level immersion-cooling platform effectively uses the server's case as a reservoir and pumps coolant to hotspots like the CPU, GPU, or memory. The company also offers a 3U conversion kit for adapting air-cooled servers to liquid cooling.
Both designs utilize a liquid-to-liquid heat exchanger toward the back of the chassis, where deionized water is pumped in and heat is removed from the system using an external dry cooler.
This is a stark departure from the approach used by rival immersion-cooling vendors, such as LiquidStack or Submer, which favor submerging multiple systems in a tub full of coolant — commonly a two-phase refrigerant or specialized oil.
While this approach has shown promise, and has even been deployed in Microsoft’s Azure datacenters, the unique form factors may require special consideration from building operators. Weight distribution is among operators’ primary concerns, Dell’Oro analyst Lucas Beran told The Register in an earlier interview.
Standardized reference designs in the works
The lack of a standardized form factor for deploying and implementing these technologies is one of several challenges Intel hopes to address with its $700 million Oregon liquid and immersion cooling lab.
Announced in late May, the 200,000-square-foot facility at Intel's Hillsboro campus, about 20 miles west of Portland, will qualify, test, and demo the chipmaker's expansive datacenter portfolio using a variety of cooling tech. Intel is also said to be working on an open reference design for an immersion-cooling system, which is being developed by Intel Taiwan.
Intel plans to bring other Taiwanese manufacturers into the fold before rolling out the reference design globally. Whether the x86 giant will be able to bring any consistency to the way immersion cooling will be deployed in datacenters going forward remains to be seen, however.
Even if Intel's reference design never pans out, other initiatives are pursuing similar goals, including the Open Compute Project's advanced cooling solutions sub-project, launched in 2018.
It aims to establish an ecosystem of servers, storage, and networking gear built around common standards for direct contact, immersion, and other cooling tech.
In the meantime, the industry will carry on chilling the best ways it can. ®