Follow the money: Chasing cheap data centre power

Energy-saving algorithm tested

It will soon be possible for data centre operators to move workloads between data centres in pursuit of the cheapest electricity supply.

As reported in the MIT Technology Review, researchers from MIT, Carnegie Mellon University and Akamai developed and tested a routing algorithm that showed energy savings of up to 40 per cent could be made by moving workloads from expensively-supplied data centres to ones with cheaper electricity.

Electricity prices in the US and elsewhere change with seasonal demand, fuel cost movements, and rises and falls in local consumer demand. There is a lot of volatility, even between nearby locations. No single location is always cheaper than another, and prices can change hourly as well as daily.

Major data centres can draw a whopping amount of power and cost their owners upwards of $30m a year for electricity. If workloads could be moved from a data centre that has just seen a hike in its electricity price to a lower-cost centre, that bill could be cut.

The researchers worked with Akamai, using traffic-routing data from its distributed content delivery server infrastructure to test their ideas. They tracked electricity supply costs in 29 US cities over three years. With that data to hand, they developed a routing algorithm that calculated the energy cost savings available if data workloads were moved from high-cost to low-cost data centres.
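To give a flavour of the approach, a crude price-following decision might look like the Python sketch below. The published algorithm is considerably more sophisticated; the city names, prices and single-criterion selection here are illustrative assumptions, not the researchers' code.

```python
# A minimal sketch of price-following routing, not the researchers' algorithm:
# send the movable share of a workload to whichever acceptable site currently
# has the cheapest electricity. Sites, prices and the selection rule are made up.

def pick_site(hourly_prices_usd_per_mwh, eligible_sites):
    """Return the eligible site with the cheapest current electricity price."""
    return min(eligible_sites, key=lambda site: hourly_prices_usd_per_mwh[site])

prices = {"Boston": 92.0, "Chicago": 61.5, "Atlanta": 74.3}  # $/MWh, illustrative
eligible = ["Boston", "Chicago", "Atlanta"]                  # sites that meet latency needs
print(pick_site(prices, eligible))                           # -> Chicago
```

In practice the choice would be re-run as prices move, hour by hour, since no single site stays cheapest for long.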

Tested against real Akamai data, the algorithm found potential energy cost savings of up to 40 per cent. In effect this is an arbitrage operation, and it has obvious appeal to cloud computing service providers with several data centres, which could move workloads to cut energy costs by playing expensive electricity supplies off against cheaper ones.

Energy suppliers themselves could also use the idea, negotiating with large data centre customers to shift workloads elsewhere when demand in the supplier's geographic area runs too high.

A data centre operator implementing this routing algorithm would need to track the electricity supply cost of any potentially movable workload. It would also have to subtract the electricity cost of the idled workload infrastructure left behind in the first, higher-cost data centre from the savings made by running that workload in the second, lower-cost destination data centre.
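As a rough illustration of that sum, the sketch below sets the price gap on the moved work against the cost of powering the kit left idling behind. The flat per-kWh model and all the figures are assumptions for the sake of the example, not numbers from the research.

```python
# Back-of-envelope net saving from moving a workload; the simple kWh model and
# every figure below are illustrative assumptions.

def net_saving_usd(workload_kwh, source_price_per_kwh, dest_price_per_kwh, idle_kwh_at_source):
    """Price gap on the moved work minus the cost of power still drawn by idled kit at the source."""
    gross_saving = workload_kwh * (source_price_per_kwh - dest_price_per_kwh)
    idle_penalty = idle_kwh_at_source * source_price_per_kwh
    return gross_saving - idle_penalty

# 10 MWh of work, 9 cents vs 6 cents per kWh, 1 MWh of idle draw left behind:
print(net_saving_usd(10_000, 0.09, 0.06, 1_000))  # 300 - 90 = 210 dollars saved
```

The move only pays while that figure stays positive; once the idle draw at the source eats the price gap, the workload is better left where it is.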

Data centres would need added instrumentation to do this, so that an operator could, for example, produce on request the current and forecast electricity supply costs for operating and cooling an identifiable, movable workload running on a particular set of servers, storage and networking boxes. Where such workloads sit on virtualised servers, with virtual machines and allied resources instantiated on demand, the algorithm could become hugely complex.
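One hypothetical shape for that instrumentation is a per-workload report a data centre could return on request. The fields below are assumptions rather than any existing API, and in a virtualised environment each figure would have to be apportioned across a shifting population of virtual machines.

```python
# A hypothetical per-workload power report; field names and figures are
# illustrative assumptions, not an existing product or API.
from dataclasses import dataclass

@dataclass
class WorkloadPowerReport:
    workload_id: str
    boxes: list[str]                   # servers, storage and network kit it occupies
    current_cost_usd_per_hour: float   # metered power plus a share of cooling
    forecast_cost_usd_per_hour: float  # expected cost over the next pricing interval

report = WorkloadPowerReport("billing-batch-17", ["srv-204", "srv-205"], 4.20, 5.10)
print(report.forecast_cost_usd_per_hour > report.current_cost_usd_per_hour)  # True: a candidate to move
```

Keeping that bookkeeping accurate in a demand-driven virtualised environment is where the complexity piles up. ®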
