Time to lower the data centre’s temperature

Keep cool with the latest in servers

At first glance the data centre power and cooling equation seems straightforward. More processing power calls for more energy, resulting in a need for costly cooling measures.

Dig a little deeper, however, and you find it is nowhere near that simple. There are lots of ways of boosting data centre performance without raising temperatures.

Virtualisation has to be one of the most effective ways of going about this task, making it possible to both reduce the number of physical servers and take better advantage of the available hardware to host the same workloads, if not bigger ones.

Half measures

Most data centre managers have cottoned on to this idea, but not all. Moreover, even where virtualisation has been employed it is common to find scope for yet more consolidation.

Estimates vary but it is unusual to find companies with more than half their servers virtualised. Added to which there’s a general reluctance to raise virtual to physical machine ratios to the limit and so fully exploit the energy and cooling gains this can deliver.

The reason is partly a wish to avoid single points of failure, but with careful planning that doesn’t have to be a major consideration.

It is also down to a reluctance to push processors too hard in the commonly held, but erroneous, belief that by going beyond 50 to 60 per cent utilisation you are somehow “stressing” the hardware and making it more prone to failure.

That is not the case. Modern servers are designed to be run to capacity and can support a far greater virtual machine population than most data centre admins allow.

Less is more

It is probably also true that admins like to leave some processing power in reserve – just in case. But under-used servers still consume power and still need to be cooled, so why not make the most of what they are capable of doing?
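
To put a rough number on that, here is a back-of-envelope sketch of the energy and cooling saving from consolidating lightly loaded hosts onto fewer, harder-working ones. The wattages, tariff and cooling overhead below are illustrative assumptions, not vendor figures.

```python
# Back-of-envelope consolidation maths (illustrative figures, not vendor data).
# An under-used server still draws a large fraction of its peak power, so
# packing more virtual machines onto fewer hosts cuts both the energy bill
# and the heat the cooling plant has to remove.

IDLE_WATTS = 150.0      # assumed draw of a lightly loaded 1U server
LOADED_WATTS = 300.0    # assumed draw of the same server near full utilisation
COOLING_OVERHEAD = 0.5  # assume 0.5W of cooling for every watt of IT load
PENCE_PER_KWH = 12.0    # assumed electricity tariff

def annual_cost(watts: float) -> float:
    """Annual electricity cost in pounds, including the cooling overhead."""
    kwh = watts * (1 + COOLING_OVERHEAD) * 24 * 365 / 1000
    return kwh * PENCE_PER_KWH / 100

# Ten hosts ticking over at roughly 20 per cent utilisation...
before = 10 * annual_cost(IDLE_WATTS + 0.2 * (LOADED_WATTS - IDLE_WATTS))
# ...versus three hosts driven to roughly 70 per cent after consolidation.
after = 3 * annual_cost(IDLE_WATTS + 0.7 * (LOADED_WATTS - IDLE_WATTS))

print(f"Before consolidation: £{before:,.0f} a year")
print(f"After consolidation:  £{after:,.0f} a year")
print(f"Saving:               £{before - after:,.0f} a year")
```

Plug in your own server counts and tariff and the shape of the result stays the same: the idle hosts kept in reserve “just in case” cost real money to power and cool.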

Another way of balancing the equation is to replace your old servers with something newer, even where the hardware is only a couple of years old. From a budgeting perspective it sounds like a non-starter, but it can save money on energy and cooling bills and help out in other ways too.

Firstly, the bar on processing power continues to be raised so that, given the right configuration, you are pretty much guaranteed to be able to do more work with fewer servers than you do now.

Equally, there’s an ever increasing emphasis on energy efficiency, again enabling you to do more for less.

Processors are more efficient, and cooling technology, too, has advanced, with managed airflows now commonplace in both rack and blade server platforms.

Naturally, there’s a cost involved but with energy prices seemingly rising every day, you should be able to recoup the investment fairly quickly.

Plus there are other potential benefits, such as being able to justify moving to a more flexible and scalable blade architecture now, rather than having to wait until your rack servers come to the end of their life.

One last advantage is the trend toward addressing power management issues in the hardware, which sees vendors building extra thermal sensors into their designs and providing additional, very granular power and cooling controls.

Hot and bothered

We are not just talking about spinning down idle screens or disks, but turning processing cores on and off to match workload demands, varying fan speed differentially across the chassis and being able to set an overall limit on the power that servers are allowed to consume.
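
To make that concrete, here is a minimal sketch of what such controls can look like from the operating system side, assuming a Linux host that exposes CPU hotplug through sysfs and a baseboard management controller that supports DCMI power capping via the ipmitool utility. The core number and wattage are purely illustrative.

```python
# Minimal sketch of host-level power controls. Assumes a Linux host with
# CPU hotplug exposed via sysfs and a BMC that supports DCMI power capping
# through ipmitool; check your platform's documentation before relying on it.
import pathlib
import subprocess

def set_core_online(core: int, online: bool) -> None:
    """Turn a processing core on or off to match workload demand."""
    path = pathlib.Path(f"/sys/devices/system/cpu/cpu{core}/online")
    path.write_text("1" if online else "0")

def set_power_cap(watts: int) -> None:
    """Ask the BMC to enforce an overall limit on server power draw."""
    subprocess.run(["ipmitool", "dcmi", "power", "set_limit",
                    "limit", str(watts)], check=True)
    subprocess.run(["ipmitool", "dcmi", "power", "activate"], check=True)

if __name__ == "__main__":
    set_core_online(3, False)   # park core 3 while the overnight load is light
    set_power_cap(450)          # cap the whole box at 450W
```

Fan-speed and per-zone cooling tweaks, by contrast, tend to be vendor-specific, which is why they are usually driven through the vendor’s own management tools rather than a generic utility.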

Such power management features are now commonplace in products from all the major vendors, as are supporting management interfaces and integration with the top system management platforms.

All are designed to keep your servers running at maximum efficiency and prevent them, and you, from getting hot under the collar. ®
