

The data centre design that lets you cool down – and save electrons

A chilling tale from the server aisles

I started my commercial data centre experience in London in the late 1990s. Even back then, most of the service providers were parroting the same mantra: “Your power provision is limited, and we'll charge you through the nose for anything over the basic consumption figure you've signed up to.”

The logic most of them gave was that the supply to the building or floor was limited, so they had to be strict. And it's no surprise really. In populous areas such as London’s Docklands you have large numbers of tall buildings all sucking up electrons, so filling a few floors of a building with hundreds of power-intensive hosting cabinets without some kind of rationing isn't exactly going to help the situation.

So you had to be a bit creative about how you implemented your systems, as throwing infinite amounts of noisy kit at your cabinets just wasn't an option.

I grew up, then, thinking service providers restrict your power consumption simply because they have to share a limited number of amps across their customer base.

And it'd be forgivable for anyone in an inner-city data centre to think the same. In reality, though, there's far more to the story than just the amps your kit draws. Think about the end-to-end power problem for the service provider.

The service provider's power problem

First, you can't escape the fact that they have a non-trivial power requirement to run their own infrastructure, the network services they provide to their data centre customers, and the basics such as lighting and the coffee machine in the break room.

Happily, this is largely constant once it's initially been put together – although they may add a switch or two and the odd router, the power draw isn't going up by an order of magnitude over time.

Then they have to power the equipment you're putting in the cabinets. The more customers they host, the greater the power requirement – it's pretty much a direct correlation.

There's another direct correlation too, though: the more kit you have, the more power it'll draw. But the fact that's easily forgotten is that the more power you're drawing, the more heat you're generating. And the more heat you generate, the more the service provider must do to remove it, to keep the data centre’s temperature down.
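To put rough numbers on that chain, here's a back-of-envelope sketch. The cabinet loads and the cooling-plant efficiency figure are illustrative assumptions, not figures from the article – but the core point holds: almost every watt the kit draws comes back out as heat the provider has to remove, at an additional cost in power.

```python
# Illustrative figures only: hypothetical per-cabinet IT loads, in kW.
cabinet_draw_kw = [3.0, 2.5, 4.0, 3.5]

it_load_kw = sum(cabinet_draw_kw)   # power in...
heat_load_kw = it_load_kw           # ...is heat out, to a first approximation

# Cooling plant is often rated by COP: kW of heat moved per kW of
# electricity it consumes. A COP of 3 is a plausible ballpark assumption.
COP = 3.0
cooling_power_kw = heat_load_kw / COP

print(f"IT load {it_load_kw:.1f} kW -> "
      f"cooling plant draws ~{cooling_power_kw:.1f} kW on top")
```

So for these made-up numbers, 13kW of customer kit drags along roughly another 4.3kW just to shift the heat – which is why providers care about your draw beyond the amps alone.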

Heat's not easy to transmit

Getting power to a server is easy: you can run a nice slender, tidy cable along a tray under a raised floor and present it in an idiot-proof way to the customer. Stuff 3kW in one end and, aside from a negligible bit of loss due to resistance, you'll pretty much get 3kW out of the other end.

The heat efflux of servers and switches isn't so easy to transmit out of the room, though: there's no cheap, elegant way to take a couple of kW out of the back of a cabinet and squirt it all through some piping to a heat exchanger.

That's why the backing track in a data centre hosting room is the roar of overpowered fans – they have to cycle all the air, because there's no practical way to single out just the hot bits.
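The scale of that air-shifting job follows from basic thermodynamics: Q = ṁ·cₚ·ΔT. Here's a sketch using standard sea-level air properties; the 3kW cabinet and 10°C allowable temperature rise are illustrative assumptions:

```python
# Sketch: airflow needed to carry heat away, via Q = m_dot * cp * delta_T.
CP_AIR = 1005.0   # specific heat of air, J/(kg*K)
RHO_AIR = 1.2     # density of air, kg/m^3 (roughly, at sea level)

def airflow_m3s(heat_w, delta_t_k):
    """Volumetric airflow needed to absorb heat_w with a delta_t_k rise."""
    mass_flow = heat_w / (CP_AIR * delta_t_k)   # kg of air per second
    return mass_flow / RHO_AIR                  # m^3 of air per second

# One assumed 3 kW cabinet, allowing the air to warm by 10 degrees C
flow = airflow_m3s(3000, 10)
print(f"{flow:.2f} m^3/s (~{flow * 2119:.0f} CFM) per cabinet")
```

That works out to roughly a quarter of a cubic metre of air per second for a single cabinet – multiply by a room full of them and the roar starts to make sense.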

What you can do, though, is make the heat output of the contents of the data centre more predictable. After all, you know where the heat is coming from (the cabinets), so what can be done to ease the task of heat dissipation?
