Google apps split with Google File System
Closed-source Hadoop precursor jilted for 'Colossus'
Google has moved "most" of its online services off the Google File System that has underpinned its famously distributed back-end infrastructure for a good ten years, according to Google senior vice president of operations Urs Hölzle.
Hölzle – the man who led the development of the Google back-end – tells ZDNet that the company is "phasing out GFS in favor of the next-generation file system that is very similar." Presumably, this is a reference to Colossus, a revamped file system sometimes referred to as GFS2 or GFS II.
In 2009, Google webspam guru Matt Cutts confirmed that Colossus was part of the company's new search infrastructure, codenamed Caffeine. A year later, Google senior director of engineering Eisar Lipkovitz told us that Colossus was specifically built for use with BigTable, Google's distributed database. Caffeine discards Google MapReduce – the company's batch-oriented distributed number-crunching platform – in favor of the realtime setup provided by BigTable.
In short, Caffeine expands on BigTable to create a database programming model that lets the company make changes to its web index without rebuilding the entire index from scratch.
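To make that distinction concrete, here is a minimal Python sketch (not Google's code; the class and method names are invented for illustration) contrasting a MapReduce-era batch rebuild of an inverted index with the kind of per-document, incremental update that a BigTable-style mutable store makes possible:

```python
# Hypothetical sketch: batch rebuild vs. incremental update of a web index.
# Names (InvertedIndex, rebuild, add_document) are illustrative, not Google's API.

from collections import defaultdict

class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)   # term -> set of document IDs

    def rebuild(self, corpus):
        """Batch style: throw the index away and rebuild it from the whole corpus."""
        self.postings.clear()
        for doc_id, text in corpus.items():
            self._index(doc_id, text)

    def add_document(self, doc_id, text):
        """Incremental style: fold one changed page into the live index."""
        self._index(doc_id, text)

    def _index(self, doc_id, text):
        for term in text.lower().split():
            self.postings[term].add(doc_id)

index = InvertedIndex()
index.rebuild({"p1": "google file system", "p2": "distributed storage"})
index.add_document("p3", "colossus file system")   # no full rebuild needed
print(sorted(index.postings["system"]))            # ['p1', 'p3']
```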
Lipkovitz indicated that Colossus wasn't necessarily suited for use with Google services outside of search, but this was contradicted by an earlier interview Google had given to the Association for Computing Machinery (ACM). According to that interview, Colossus – or GFS2 – was specifically designed for low-latency applications such as Gmail and YouTube. Google's Sean Quinlan said the original GFS was unsuited to such apps, but that GFS2 was.
"There are places in the [original GFS] design where we've tried to optimize for throughput by dumping thousands of operations into a queue and then just processing through them," he said. "That leads to fine throughput, but it's not great for latency. You can easily get into situations where you might be stuck for seconds at a time in a queue just waiting to get to the head of the queue."
The original GFS also had a single point of failure. A master node oversaw data spread across a series of distributed chunkservers, and there was no master node back-up. Google later added automatic failover, but the basic design still posed a problem. "While these instances - where you have to provide for failover and error recovery - may have been acceptable in the batch situation, they're definitely not OK from a latency point of view for a user-facing application," Quinlan said.
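The shape of that problem, in a rough sketch with invented names: every read first asks one metadata master which chunkserver holds the data, so while that master is down or mid-failover, every client stalls.

```python
# Rough sketch of a GFS-style single-master lookup; all names are invented.

class MasterDownError(Exception):
    pass

class SingleMaster:
    """One process owns all of the file -> chunkserver metadata."""
    def __init__(self):
        self.alive = True
        self.chunk_locations = {("/logs/web", 0): "chunkserver-17"}

    def locate(self, path, chunk_index):
        if not self.alive:
            # Until failover completes, *every* client read blocks here.
            raise MasterDownError("metadata master unavailable")
        return self.chunk_locations[(path, chunk_index)]

master = SingleMaster()
print(master.locate("/logs/web", 0))    # chunkserver-17

master.alive = False                     # the single point of failure
try:
    master.locate("/logs/web", 0)
except MasterDownError as e:
    print("read stalled:", e)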
Colossus uses distributed masters as well as distributed slaves, and the chunkservers can handle many small chunks of data. This, Quinlan said, lets you spread data across more machines, and he explained that this would allow the Google infrastructure to expand for another ten years.
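One plausible way to picture the change, sketched here under the assumption that metadata is simply sharded by hash (Google has not published Colossus internals in that detail): with many masters, each owns only a slice of the chunk metadata, so no single process is a bottleneck or a lone point of failure.

```python
# Sketch of metadata sharded across several masters; an assumption for
# illustration, not a description of Colossus internals.

import hashlib

MASTERS = ["master-a", "master-b", "master-c", "master-d"]

def master_for(path, chunk_index):
    """Pick the master that owns this chunk's metadata by hashing its key."""
    key = f"{path}:{chunk_index}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return MASTERS[digest % len(MASTERS)]

# Many small chunks of one file are governed by different masters,
# so metadata load (and failure impact) is spread across machines.
for chunk in range(6):
    print(f"/videos/cat.webm chunk {chunk} -> {master_for('/videos/cat.webm', chunk)}")
```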
But in his interview with ZDNet, Hölzle indicated that Colossus will soon give way to another platform. "I think three years from now we'll try to retire that because flash memory is coming and faster networks and faster CPUs are on the way and that will change how we want to do things," he said. "One of the nice things is that everyone today is using the BigTable compressed database. Suppose we have a better BigTable down the line that does the right thing with flash – then it's relatively easy to migrate all these applications as long as the API stays stable."
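Hölzle's point about a stable API is the familiar one about programming to an interface: if applications only ever call the table abstraction, the storage engine underneath can be swapped, say for a flash-aware one, without touching them. A generic sketch with invented names, not Google's API:

```python
# Generic sketch of the "stable API" point: callers depend on the interface,
# so the storage engine behind it can be replaced. All names are invented.

from abc import ABC, abstractmethod

class Table(ABC):
    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class DiskBackedTable(Table):
    """Stand-in for today's disk-oriented store."""
    def __init__(self):
        self._rows = {}
    def put(self, key, value):
        self._rows[key] = value
    def get(self, key):
        return self._rows[key]

class FlashAwareTable(Table):
    """Stand-in for a future flash-optimised store behind the same API."""
    def __init__(self):
        self._rows = {}
    def put(self, key, value):
        self._rows[key] = value
    def get(self, key):
        return self._rows[key]

def application(table: Table):
    """Application code sees only the Table API, never the backend."""
    table.put("row:1", b"hello")
    return table.get("row:1")

print(application(DiskBackedTable()))    # b'hello'
print(application(FlashAwareTable()))    # same code, different engine underneath
```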
GFS and MapReduce were the inspiration for Hadoop, the open source number-crunching platform developed under the aegis of Apache. It's now used by everyone from Twitter to Facebook to Yahoo!. A sister project, HBase, provides an open source version of BigTable for Hadoop. ®