Every time we Tweet, procrastinate by watching an online video of a puppy with hiccups, or query a cloud service, we spin up a chain reaction of hardware and electrons in some data centre somewhere. This generates heat that must be dissipated.
Moore’s Law – the observation, which recently passed its 50-year milestone, that processor complexity grows exponentially – has found its greatest obstacle in heat.
Cramming ever more components into silicon demands more electrical power and generates more heat, dramatically degrading performance and shortening the life of equipment if it is not properly cooled.
If you’ve ever been to a data centre, the necessity and urgency of removing heat to protect the equipment and the premises is immediately obvious: ambient air cooling (chiefly fans), air conditioners, humidifiers, cooling towers and other gear all work to keep racks of servers cool.
All that is estimated to account for around 40 per cent of the energy bill of a typical data centre.
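To put that 40 per cent figure in context, a back-of-envelope calculation shows what cooling can cost a facility over a year. The facility size, tariff and hours below are hypothetical illustrative values, not figures from this article:

```python
# Back-of-envelope estimate of annual cooling spend, using the ~40 per cent
# share of the energy bill cited above. IT load and tariff are hypothetical.

IT_LOAD_KW = 1_000            # hypothetical facility with 1 MW of IT load
COOLING_SHARE = 0.40          # cooling's share of the energy bill (as cited)
TARIFF_PER_KWH = 0.15         # hypothetical electricity tariff, GBP per kWh
HOURS_PER_YEAR = 24 * 365

# If cooling is 40% of the bill, the other 60% covers IT and everything else.
total_kw = IT_LOAD_KW / (1 - COOLING_SHARE)
cooling_kw = total_kw * COOLING_SHARE

annual_cooling_cost = cooling_kw * HOURS_PER_YEAR * TARIFF_PER_KWH
print(f"Estimated cooling draw: {cooling_kw:.0f} kW")
print(f"Estimated annual cooling cost: £{annual_cooling_cost:,.0f}")
```

On these assumed numbers, cooling alone draws roughly two-thirds of a megawatt continuously and costs the better part of a million pounds a year – which is why the expenditure attracts so much attention.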
However, modern data centres need more innovative approaches to cooling to slash this expenditure and extend the life of capital investments. Not forgetting, of course, the increasing pressure on data-centric businesses to prove their green credentials and demonstrate energy efficiency and reduced environmental impact.
But if installing more cooling equipment isn’t the way, what is?
The concept of using liquid rather than air to manage greater heat loads generated by higher-density racks is certainly nothing new. Liquid cooling patents and products have been regularly announced by dedicated vendors.
However, it remains a technology that has found more fertile ground in niche high-performance computing (HPC) environments than in mainstream data centre applications, where it is certainly still more talked about than implemented.
Most liquid cooling already deployed in server racks relies on the principle that filling copper heat pipes with liquid coolant will dissipate heat faster from components than copper cooling fins and ambient air alone. Sometimes capillary pipes are used to take liquid directly to major system components, with the rest still cooled by air.
But vendors are increasingly choosing to encapsulate all server components within a sealed unit instead, causing that module itself to effectively become one large heat pipe. The unit is then filled with an efficient heat-convecting liquid to cool all active components directly.
Because liquid cooling eliminates the need for air management and humidity controls within the data centre, vendors of liquid cooling systems claim that power densities of five times or more those of traditional air-managed server rooms are achievable.
UK-based liquid cooling specialist Iceotope uses Novec in its sealed server blade modules to cool components directly. Novec is a dielectric fluid engineered by 3M that is not electrically conductive, and is claimed to allow heat convection up to 20 times faster than water, depending on application.
Once the modules are housed in a rack, heat is carried away from these hot-swappable units by water channelled into each enclosure, which acts as a secondary heat-transfer loop.
The warm output water can then be used in other parts of the business away from the data centre, such as underfloor heating. Iceotope claims that its liquid cooling approach combined with hot water cooling requires 75 per cent less data centre area and enables 40-50 per cent reductions in data centre build costs.
Peter Hopton, founder and chief visionary officer at Iceotope, said: “Total liquid cooling means that servers or electronics breathe no air and every component is cooled by capturing the heat to liquid. It is advantageous, not only because liquids are more efficient at transmitting heat, but due to the large amount of infrastructure displaced by the removal of the need to have air handling.”
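The physics behind Hopton’s claim that “liquids are more efficient at transmitting heat” is easy to check with textbook figures. The sketch below compares the volumetric heat capacity of water and air – a generic comparison, not Novec’s or Iceotope’s specific numbers:

```python
# Per unit volume, water absorbs far more heat than air for the same
# temperature rise. Standard textbook properties near room temperature.

water_density = 1000.0     # kg/m^3
water_cp = 4186.0          # J/(kg*K), specific heat of water
air_density = 1.2          # kg/m^3
air_cp = 1005.0            # J/(kg*K), specific heat of air

# Volumetric heat capacity: joules absorbed per cubic metre per kelvin.
water_vol_cap = water_density * water_cp   # ~4.19 MJ/(m^3*K)
air_vol_cap = air_density * air_cp         # ~1.21 kJ/(m^3*K)

ratio = water_vol_cap / air_vol_cap
print(f"Water carries ~{ratio:,.0f}x more heat per unit volume than air")
```

A litre of water warming by one degree soaks up several thousand times the heat of a litre of air doing the same – which is why a modest water loop can displace an entire room’s worth of air-handling infrastructure.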