Google chucks $757m at data center empire

60% surge in secret hardware cash

Google's capital expenditures – the amount of money forked into its worldwide network of data centers – reached $757m in the third quarter, their highest level since early 2008, when the company was erecting at least three new data center facilities in the US.

In fact, the first quarter of 2008 was the only quarter in the company's history when it spent more on its data centers – $842m – than in the three months ending September 30. We know that the company is currently building a new data center in Finland – a newspaper destruction metaphor sitting on the site of an abandoned paper mill – but there are presumably other new projects underway as well.

Google did not immediately respond to an inquiry seeking an explanation for the steep rise in spending. The company likes to keep quiet about the location and design of its data centers. In April 2009, the company at long last lifted the curtain on its famously modular data center design – but it only showed bits and pieces of its very first modular facility, which had been built four years earlier.

The epic ad broker now owns at least 37 data centers across the globe, including the unfinished facility in Hamina, Finland. But over the past three years, the Finland facility is the only new data center it has revealed to the public. In the summer of 2007, Google announced it was building a trio of new data centers in the US – in Goose Creek, South Carolina; Pryor, Oklahoma; and Council Bluffs, Iowa – and its spending reached unprecedented levels in the first quarter of the following year. But then the bottom fell out of the US economy.

The company went from spending $842m in the first quarter of 2008 to a mere $139m in the second quarter of 2009, and along the way, it delayed construction of the Oklahoma facility. Spending has slowly increased since the middle of 2009, but it took a rather large leap during this last quarter, jumping 60 per cent, from $476m to $757m.

Most likely, the Finland data center is responsible for some of the increase – the facility is slated to cost $260m, including the $52m purchase of the paper mill – but this can't account for it all.
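
For anyone checking the arithmetic, here's a minimal back-of-the-envelope sketch in Python. The quarterly figures are the ones reported above; the assumption that the Hamina budget is spread across several quarters is ours, not Google's.

    # Back-of-the-envelope check of the figures quoted above (all values in $m)
    q2_2010 = 476        # capital expenditure, second quarter of 2010
    q3_2010 = 757        # capital expenditure, third quarter of 2010
    finland_budget = 260 # total Hamina budget, including the $52m mill purchase

    increase = q3_2010 - q2_2010           # $281m more than the previous quarter
    pct_jump = 100.0 * increase / q2_2010  # roughly 59 per cent -- the "60 per cent" leap

    # The Hamina build spans several quarters, so only a slice of its $260m
    # budget falls inside Q3 -- which is why it can't account for the whole jump
    print(f"Quarter-on-quarter increase: ${increase}m, a {pct_jump:.0f} per cent jump")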

Data Center Knowledge has a nice graph showing the ups and downs of Google's capital expenditures.

Mountain View has said that in order to roll out Google Instant – a new version of its search engine that serves up results pages as you type – it increased the capacity of its back-end. But it also downplayed the extent of this extra capacity, putting more emphasis on its efforts to design Google Instant in a way that minimizes the need for added servers.

"One solution would have been to simply invest in a tremendous increase in server capacity, but we wanted to find smarter ways to solve the problem," reads a blog post from distinguished engineer Ben Gomes. "We did increase our back-end capacity, but we also pursued a variety of strategies to efficiently address the incredible demand from Google Instant."

During the press event in San Francisco announcing the service, one engineer said that the Google Instant servers keep track of what data the browser already has and what data is already being gathered by other servers, and that the company improved its caching system for the roll-out of the service. At one point, Gomes indicated the new caching system is related to Google Caffeine, the new search-indexing infrastructure that rolled out across the company's data centers earlier this year.
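
Google hasn't published the details of that caching layer, but the behavior the engineers described – don't resend results a browser already holds, don't duplicate look-ups that are already in flight, and serve repeat prefixes from cache – can be illustrated with a hypothetical sketch like the one below. Every name and data structure here is invented for illustration; this is not Google's code.

    # Hypothetical illustration of the caching behavior described above --
    # not Google's implementation. The server remembers which result sets each
    # browser already holds and which prefix queries are already being fetched,
    # so keystroke-by-keystroke requests don't trigger redundant back-end work.
    class InstantResultCache:
        def __init__(self):
            self.cached = {}        # prefix -> results already computed
            self.in_flight = set()  # prefixes another server is already fetching
            self.client_has = {}    # client id -> prefixes already sent to that browser

        def handle_keystroke(self, client_id, prefix):
            sent = self.client_has.setdefault(client_id, set())
            if prefix in sent:
                return None                       # the browser already has these results
            if prefix in self.cached:
                sent.add(prefix)
                return self.cached[prefix]        # served from cache, no back-end hit
            if prefix in self.in_flight:
                return None                       # another server is fetching it; wait
            self.in_flight.add(prefix)
            results = self.query_backend(prefix)  # the expensive call being minimized
            self.in_flight.discard(prefix)
            self.cached[prefix] = results
            sent.add(prefix)
            return results

        def query_backend(self, prefix):
            return [f"result for {prefix!r}"]     # stand-in for the real search back-end

The real system is, of course, distributed across many servers and tied into the Caffeine indexing infrastructure; the sketch only captures the book-keeping idea of avoiding repeat work as the user types.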

Eventually, Google plans to expand its network across "100s to 1000s" of locations around the world. So, if it hasn't started data center number 38, it will. ®
