So, Google borged a mystery chip designer that was working on "some kind of server," and the web is convinced the Chocolate Factory is merely interested in using this all-star startup to build a GPad. How quickly the web forgets that Google is the world's fourth-largest server maker.
According to a New York Times source "familiar with the deal," Google acquired San José chip designer Agnilux not to build chips but to port its Chrome OS and Android operating systems to things like tablets and TV set-top boxes. And on one level, this makes sense. Agnilux was formed by ex-PA Semi employees once pulled into Apple to build SoCs for the iPod, the iPhone, and apparently the iPad, and it's no secret Google is exploring consumer devices well beyond its Nexus One phone.
But an earlier Times story indicated that Agnilux was brewing "some kind of server." The company apparently had a partnership with Cisco. And its roots can be traced back to server chips like the DEC Alpha and the AMD Opteron. It's been rumored for years that Google is interested in building server chips of its own, and if it hasn't already, you can bet that one day it will - with or without Agnilux engineers.
Google likes to say it's not a hardware company. When the ad broker launched the Nexus One, it went to unusual lengths to convince the world it played no part in the design of the physical device. But at the same time, it builds hardware on an epic scale for use across the Googlenet, a private infrastructure that handles more traffic than all but a pair of tier one ISPs.
It was recently estimated that Google runs 2 per cent of the world's servers, and it would seem that all of them are custom-built. Reports also indicate that Google builds its own routers, and it's no secret the company fashions its own data centers, piecing them together with hardware-packed shipping containers.
Last fall, Google released a brief video of a data center it built in 2005. The facility held 45 shipping containers, each housing 1,160 servers. Google is now operating about 35 data centers across the globe, and if you extrapolate, its total server count - server consolidation aside - is around 1,827,000. That figure is well above recent press estimates. And it may be low. After all, that data center was built in 2005.
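The extrapolation is straightforward multiplication - a quick back-of-the-envelope sketch, assuming (generously) that every Google data center matches that 2005 facility:

```python
# Back-of-the-envelope server count from the figures in the 2005
# data center video. Big assumption: every one of Google's data
# centers is built like the 2005 facility.
containers_per_dc = 45
servers_per_container = 1160
data_centers = 35

servers_per_dc = containers_per_dc * servers_per_container
total_servers = servers_per_dc * data_centers

print(f"{servers_per_dc:,} servers per data center")  # 52,200
print(f"{total_servers:,} servers worldwide")         # 1,827,000
```

Crude, yes - but even this lowball figure dwarfs most published estimates.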
According to a recent public presentation from the company, Google is intent on scaling its worldwide infrastructure to between one million and 10 million servers, encompassing 10 trillion directories and a quintillion bytes of storage. And this would span "100s to 1000s" of locations around the globe.
All these servers need chips. But the thing to remember about the ever-expanding Googlenet is that it's designed to process tasks that are broken into tiny little pieces. Google isn't interested in running the fastest processors on the planet. It's interested in running efficient chips that suit its pathological obsession with distributed computing.
When it released that data center video, Google also gave the world a peek at the battery-backed, two-socket server nodes it packs into at least some of its data centers. Based on a Gigabyte motherboard, each node included two disks, eight memory slots, and a 12-volt DC power supply. These nodes use both Intel and AMD chips, and it would seem that the company stops short of the bleeding x64 edge, choosing processors that provide the best performance per watt.
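Selecting by performance per watt rather than raw speed is a simple ratio test. A hypothetical sketch - the chip names and numbers below are invented purely for illustration, not drawn from Google's actual parts list:

```python
# Hypothetical illustration of a performance-per-watt selection,
# the criterion the article says guides Google's processor choices.
# Relative performance scores and wattages are made-up figures.
candidates = {
    "fast-but-hot": (100, 130),  # (relative performance, watts)
    "midrange":     (80, 80),
    "low-power":    (60, 45),
}

best = max(candidates, key=lambda c: candidates[c][0] / candidates[c][1])
print(best)  # low-power: 1.33 perf/watt beats 1.0 and 0.77
```

On these numbers, the slowest chip wins - which is exactly the point: for workloads chopped into tiny distributed pieces, efficiency trumps single-chip speed.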
Plus, it wants chips that can run hot. To save costs - and, um, the planet - Google operates its data centers at temperatures above the norm. According to a former employee, at one point Google was buying chips from Intel guaranteed to operate at temperatures five degrees centigrade higher than their standard qualification.