Facebook launches data center ticker tape

Bit barn efficiency metrics on a minute-by-minute basis


Facebook has heaped pressure on major data center operators to be more transparent, publishing a dashboard that gives up-to-the-minute figures on the efficiency of the social network's gigantic bit barns.

The dashboards for the company's Prineville, Oregon, and Forest City, North Carolina, data centers were made available on Thursday, and once the company's new facility in Lulea, Sweden, is finished, Facebook will make that data available as well.

"Why are we doing this? Well, we're proud of our data center efficiency, and we think it's important to demystify data centers and share more about what our operations really look like," Lyrica McTiernan, a program manager for Facebook's sustainability team, wrote in a blog post discussing the change.

"Through the Open Compute Project (OCP), we've shared the building and hardware designs for our data centers. These dashboards are the natural next step, since they answer the question, 'What really happens when those servers are installed and the power's turned on?'"

The dashboard outputs information on both the facility's power usage effectiveness* and water usage effectiveness, as well as the outdoor humidity and temperature. This means data center buffs can see how environmental factors influence the efficiency with which Facebook can run its operation. Facebook designed the real-time information displays in collaboration with agency partners Area 17.

[Image: Facebook PUE dashboard] Facebook's surprisingly informative Spirograph

In the past, Facebook (and the rest of the tech industry's major data center operators) has been pilloried by Greenpeace and other environmental activists over the efficiency of its data centers and the types of power they consume.

Thursday's announcement sees Facebook move from quarterly disclosure of its data centers' efficiency (as favored by Google) to publishing figures every minute, on a two-and-a-half-hour delay, along with trailing 12-month averages.

"At this time we don't have plans to change, but will continue to provide consistent updates and transparency around these efforts," Google told us in an emailed statement.

The company will also open-source the front-end code for the dashboards on GitHub "sometime in the coming week," allowing other data center operators to use the tech to display information about their own facilities.

"We translate the data from our [Building Management System] into .csv files that are then visualized using the dashboards," a Facebook spokeswoman told us via email. "We've focused on standardizing the data into .csv format so that we can open source the front-end UI such that others who convert their data into .csv format – from whatever management systems they have in place – will be able to visualize their data in this way as well."
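The pipeline Facebook describes — BMS readings flattened into .csv files that any front end can visualize — is straightforward to sketch. The field names and values below are assumptions for illustration, not Facebook's actual schema:

```python
import csv
import io

# Hypothetical minute-by-minute readings pulled from a Building Management
# System. Field names and values are illustrative assumptions only.
readings = [
    {"timestamp": "2013-04-18T12:00:00Z", "pue": 1.09, "wue": 0.19,
     "outdoor_temp_f": 55.2, "humidity_pct": 61.0},
    {"timestamp": "2013-04-18T12:01:00Z", "pue": 1.10, "wue": 0.20,
     "outdoor_temp_f": 55.4, "humidity_pct": 60.5},
]

def to_csv(rows):
    """Serialize readings to CSV text a dashboard front end could consume."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(readings))
```

The point of standardizing on CSV, per Facebook's spokeswoman, is that the visualization layer stays decoupled from whatever management system produced the numbers — any operator who can emit rows like these can reuse the open-sourced UI.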

Though this all sounds fairly benign, it may upset companies that produce Building Management Systems, such as Schneider Electric, BuildingIQ, and IBM. These firms like to use flashy management dashboards similar to the one Facebook will publish as open source as part of their sales pitch, The Register understands. At the time of writing, Facebook had not responded to queries about the nature of its building management system, and had not told us whether it was self-built or acquired.

Though Facebook uses these announcements to stress the importance of transparency in web infrastructure, it is in a privileged position among the large data center operators: unlike Google, Amazon, and Microsoft, Facebook is not embroiled in a massive IaaS price war, so it need not be quite so paranoid about disclosing any scrap of information about its own facilities that could give a competitor an edge. All Facebook needs to worry about is preserving its user base so it can display ads to them, and any data center info it discloses is unlikely to be useful in Google's efforts to swell the ranks of G+.

This Vulture wishes to extend his thanks to Facebook for not mentioning "Big Data" anywhere in this data-heavy announcement. ®

* Bootnote

Power Usage Effectiveness is an industry-standard metric: the ratio of the total power consumed by a data center — including supporting infrastructure such as cooling and lighting — to the energy used by its compute, storage, and network resources. Facebook's PUE of 1.09 means that for every watt spent on IT gear, 0.09 watts are spent on supporting it. This compares favorably with data centers operated by Google, and is miles better than the industry average of 1.5 to 1.9.
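The arithmetic behind that 0.09-watt overhead figure is a one-liner; the wattage values below are made-up round numbers chosen to reproduce a 1.09 ratio:

```python
def pue(total_facility_watts, it_watts):
    """Power Usage Effectiveness: total facility power over IT equipment power.

    A perfect score of 1.0 would mean every watt drawn goes to IT gear;
    anything above 1.0 is overhead (cooling, lighting, power distribution).
    """
    return total_facility_watts / it_watts

# Illustrative figures: 1,090 W drawn in total to power 1,000 W of IT load.
overhead_per_it_watt = pue(1090.0, 1000.0) - 1.0
print(round(overhead_per_it_watt, 2))  # → 0.09
```

So at Facebook's reported 1.09, a facility drawing 109 MW total would be feeding 100 MW to servers, storage, and network kit, with the remaining 9 MW spent keeping them cool and lit.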
