Your servers are underwater? Chill out – liquid's cool
Submerge the servers in oil
A cooler future
Advocates and vendors such as Iceotope and Green Revolution believe the transition from air cooling to liquid cooling – either exclusively or in combination with new, innovative free cooling methods – is inevitable.
They believe HPC and supercomputer facilities are essentially operating as greenfield test beds and proofs of concept for the increasingly large and dense server footprints necessitated by a global shift to hyper-scale data centres and cloud-based service providers.
Recent analyst forecasts agree that cooling, spearheaded by new advanced techniques, is an area of sharp growth. In its April 2015 report Global Data Center Cooling Market 2015-2019, Technavio predicted a compound annual growth rate of 14.3 per cent over the next five years.
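As a rough sense check, a 14.3 per cent compound annual growth rate implies the market would nearly double over a five-year forecast window. The rate is Technavio's; the arithmetic sketch below is ours:

```python
# Compound annual growth: size after n years = start * (1 + rate) ** n
rate = 0.143   # Technavio's forecast CAGR (14.3 per cent)
years = 5      # the five-year forecast window

growth_multiple = (1 + rate) ** years
print(f"Market multiple after {years} years: {growth_multiple:.2f}x")  # ~1.95x
```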
The firm believes this growth will be driven largely by adoption of new cooling technology, deployed to cut power consumption and energy bills in large data centre investments. Liquid cooling deployments require significantly greater capital expenditure than air-cooling alternatives, but can more easily be justified with a longer-term ROI in mind.
If broader-scale liquid cooling adoption in data centres is inevitable, then extending the same technology to cool other kit directly – particularly increasingly powerful networking equipment – seems a logical next step, one that could perhaps remove the need to expend energy cooling the ambient air of equipment rooms.
Perhaps it's possible, even at this embryonic stage of mainstream liquid cooling adoption, to jump ahead and begin planning the next generation of data centres – ones that bypass both air and dielectric liquid cooling methods entirely. What about liquid metal?
Those familiar with building their own PC systems may have researched the idea of using metal alloys in place of conductive paste as the thermal compound between their CPU and heat sink. A decade ago, US startups expanded this concept and began exploring the idea of routing pipes of fluid around hot processors.
The technique used much the same method as the direct liquid cooling systems we know now, except it employed metal alloys of indium and gallium (soft or liquid at room temperature) rather than mineral oils or other fluids.
The key advantages cited for liquid metal cooling include a far greater capacity to conduct heat when compared with existing dielectric liquids, and the side-effect of being able to take advantage of electromagnetic pumps without moving parts.
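To put that conductivity gap in rough numbers: a gallium-indium-tin alloy such as Galinstan conducts heat at around 16.5 W/m·K, against roughly 0.13 W/m·K for a typical mineral oil. These are approximate textbook values, not figures from the vendors above, and the comparison below is a simple Fourier's-law sketch, not a full cooling-system model:

```python
# Steady-state conduction through a thin fluid layer (Fourier's law):
#   q = k * A * dT / d
# Thermal conductivities in W/(m*K) -- illustrative approximate values.
K_GALINSTAN = 16.5    # gallium-indium-tin alloy, liquid at room temperature
K_MINERAL_OIL = 0.13  # typical dielectric mineral oil

def heat_flow_watts(k, area_m2=0.001, delta_t_kelvin=40.0, thickness_m=0.001):
    """Heat conducted through a fluid layer of given area and thickness."""
    return k * area_m2 * delta_t_kelvin / thickness_m

ratio = heat_flow_watts(K_GALINSTAN) / heat_flow_watts(K_MINERAL_OIL)
print(f"Liquid metal conducts roughly {ratio:.0f}x more heat")  # ~127x
```

Conduction is only part of the picture – real systems also rely on convection and pumping – but the order-of-magnitude gap illustrates why the approach attracted pioneers despite its costs.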
Sadly for the pioneers of this approach, the disadvantages for mainstream adoption included the very high costs, the greater size and weight of a practical solution, and a lack of appetite for liquid metal cooling in data centre or enterprise applications.
Liquid metal pioneers were staking their claims to a share of the lucrative cooling market far too early. In the intervening decade, commercial liquid metal start-ups went bust, but the concept continues to be refined academically and deployed in extreme applications, notably in nuclear reactors.
Meanwhile, innovations since then in the field of dielectric fluid could finally make wider use of liquid cooling for our data hothouses not just practical but inevitable. ®