
Want to go green like Apple, but don't have billions in the bank?

Cooling data centres without landing in hot water

There's a hole in my bucket

The use of evaporative cooling has allowed companies to drive down their power usage effectiveness (PUE), but with water an increasingly scarce resource, the environmental implications of tens of thousands of litres of water evaporating into the air are not to be sniffed at. Hopton says data centres are in the same league as wine producers in terms of water consumption. “With liquid cooling, you can reduce your water usage and keep your PUE,” he notes.
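For the record, PUE is simply total facility energy divided by IT equipment energy, and The Green Grid's companion water metric, WUE, is litres of water drawn per kWh of IT energy. The snippet below works both out from made-up annual figures – illustrative numbers only, nobody's real meter readings.

```python
# Minimal sketch: computing PUE and WUE for a data centre.
# All figures are illustrative placeholders, not values from the article.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def wue(annual_water_litres: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water per kWh of IT energy."""
    return annual_water_litres / it_equipment_kwh

if __name__ == "__main__":
    it_kwh = 8_000_000         # assumed annual IT load
    facility_kwh = 10_400_000  # assumed annual total, including cooling and losses
    water_litres = 14_000_000  # assumed annual evaporative-cooling water draw

    print(f"PUE: {pue(facility_kwh, it_kwh):.2f}")        # 1.30
    print(f"WUE: {wue(water_litres, it_kwh):.2f} L/kWh")  # 1.75
```

Liquid cooling attacks the second number without giving up the first: heat is carried away in a closed loop rather than evaporated into the sky.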

One way out of this conundrum is to simply run your data centre hot. “The idea that you need to keep your data centre at 21-22 degrees centigrade is a myth,” says Memset managing director Kate Craig-Wood.

“Any server built in the last five years can tolerate temperatures of up to 40°C. We allow our data centre to have excursions to 35 degrees centigrade, and as a result we don’t need compressor-based back-up cooling – and with that come significant cost savings. We completely rely on our ‘free’ adiabatic cooling system.”
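That tolerance is what makes compressor-free designs stack up. The sketch below is a hypothetical control check, not Memset's actual system: it assumes adiabatic (evaporative-assisted) supply air lands roughly five degrees above the outside wet-bulb temperature and asks whether that stays under the 35°C excursion ceiling quoted above.

```python
# Hypothetical free-cooling check -- not Memset's actual control logic.
# The approach temperature is an assumption; the 35 C ceiling is the
# excursion limit quoted in the article.

ADIABATIC_APPROACH_C = 5.0   # assumed: supply air ~5 C above outdoor wet-bulb
EXCURSION_CEILING_C = 35.0   # maximum tolerated data-floor temperature

def supply_air_estimate(wet_bulb_c: float) -> float:
    """Rough supply-air temperature from an adiabatic (evaporative) cooler."""
    return wet_bulb_c + ADIABATIC_APPROACH_C

def free_cooling_sufficient(wet_bulb_c: float) -> bool:
    """True if adiabatic cooling alone keeps the floor under the excursion ceiling."""
    return supply_air_estimate(wet_bulb_c) <= EXCURSION_CEILING_C

# Even a 22 C wet-bulb day -- hot and humid by UK standards -- leaves headroom.
print(free_cooling_sufficient(22.0))   # True: ~27 C supply air, well under 35 C
```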

Data centre layout can have an effect on energy efficiency, too. In Byfleet, changing the physical configuration of racks to a hot-aisle-cold-aisle configuration has improved airflow efficiency, resulting in a much more unified cooling effect across the data floor. Retrofitting cold aisle corridors to force cold air through the servers has improved efficiency by around 15-20 per cent. “It’s an easy win. We saw an instant improvement,” Bedell-Pearce says.

Computational fluid dynamics (CFD) modelling software is a useful tool for modelling the flow of air and calculating where specific hot spots will occur – both now and in the future.

“Sometimes it can show some counter-intuitive answers. For example, putting high-density racks closer to air conditioning creates negative pressure at the top of units. You actually want lower density kit nearby,” Bedell-Pearce explains.
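Real CFD packages solve the full airflow and pressure equations; the toy below merely relaxes a 2D temperature grid, with racks as fixed heat sources and cooling units as sinks, which is enough to show how a model flags the hottest corner of the floor before it turns up in real life. The layout and temperatures are invented for illustration.

```python
import numpy as np

# Toy hot-spot finder -- a crude stand-in for real CFD software. It relaxes a
# 2D temperature field with fixed sources (racks) and sinks (cooling units).
# Room layout and temperatures are invented for illustration.

GRID = (20, 30)          # data floor as a 20 x 30 grid of cells
AMBIENT_C = 22.0

temp = np.full(GRID, AMBIENT_C)
fixed = np.zeros(GRID, dtype=bool)   # cells held at a constant temperature

# Cooling units along the left wall, held at supply temperature.
temp[:, 0] = 18.0
fixed[:, 0] = True

# Two rows of racks; the high-density row runs hotter.
temp[5, 5:25] = 38.0     # high-density racks
temp[14, 5:25] = 30.0    # standard racks
fixed[5, 5:25] = True
fixed[14, 5:25] = True

# Jacobi relaxation: each free cell drifts toward the average of its
# neighbours (periodic edges -- good enough for a toy).
for _ in range(2000):
    avg = 0.25 * (np.roll(temp, 1, 0) + np.roll(temp, -1, 0) +
                  np.roll(temp, 1, 1) + np.roll(temp, -1, 1))
    temp = np.where(fixed, temp, avg)

open_floor = np.where(fixed, -np.inf, temp)
hot_y, hot_x = np.unravel_index(np.argmax(open_floor), GRID)
print(f"Hottest open-floor cell: ({hot_y}, {hot_x}) at {temp[hot_y, hot_x]:.1f} C")
```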

Small improvements can also help. Something simple like swapping fluorescent tubes for LEDs can make a noticeable difference when, as in the case of 4D-DC, you have 148 of them to deal with. There the swap halved the cost of lighting and reduced the heat load created.
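The back-of-the-envelope below shows why the sums work – tube wattages, running hours and tariff are assumptions, not 4D-DC's actual figures.

```python
# Back-of-the-envelope lighting saving. Wattages, hours and tariff are
# assumptions for illustration, not figures from 4D-DC.

TUBES = 148
FLUORESCENT_W = 58           # assumed: typical T8 fluorescent tube
LED_W = 25                   # assumed: typical LED replacement tube
HOURS_PER_YEAR = 24 * 365    # data-floor lighting assumed on around the clock
TARIFF_GBP_PER_KWH = 0.15    # assumed electricity price

def annual_cost(watts_per_tube: float) -> float:
    kwh = TUBES * watts_per_tube * HOURS_PER_YEAR / 1000
    return kwh * TARIFF_GBP_PER_KWH

before, after = annual_cost(FLUORESCENT_W), annual_cost(LED_W)
print(f"Fluorescent: £{before:,.0f}/yr, LED: £{after:,.0f}/yr "
      f"({100 * (before - after) / before:.0f}% saving)")
# Every watt not burnt in a lamp is also a watt the cooling plant no longer
# has to remove, which is where the reduced heat load comes in.
```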

Similarly, grommets installed to reduce air leakage around cable holes in the floor, blanking plates and expanding foam strips between racks have between them resulted in a five to six per cent jump in efficiency across the 4D-DC data floor.

Ultimately, the route you follow must be carefully planned. Attention right now is on the servers and cooling equipment, but these are low-hanging fruit.

What happens if bodies such as the European Commission get more serious, or your colleagues on the business side of the house start making more exacting demands?

As John Clifford, energy management lead at Cisco in the UK & Ireland, tells us: “This means that any changes to create energy, cost or carbon efficiencies must be modelled and proven, with a high degree of certainty, to have no impact on day-to-day operations. This issue becomes even more complex as data centres become more mature.” ®

Read Part II – the tactics of going green – here on El Reg.
