Google apps split with Google File System

Closed source Hadoop daddy jilted for 'Colossus'

Google has moved "most" of its online services off the Google File System that has underpinned its famously distributed back-end infrastructure for a good ten years, according to Google senior vice president of operations Urs Hölzle.

Hölzle – the man who led the development of the Google back-end – tells ZDNet that the company is "phasing out GFS in favor of the next-generation file system that is very similar." Presumably, this is a reference to Colossus, a revamped file system sometimes referred to as GFS2 or GFS II.

In 2009, Google webspam guru Matt Cutts confirmed that Colossus was part of the company's new search infrastructure, codenamed Caffeine. A year later, Google senior director of engineering Eisar Lipkovitz told us that Colossus was specifically built for use with BigTable, Google's distributed database. Caffeine discards Google MapReduce – the company's batch-oriented distributed number-crunching platform – in favor of the realtime setup provided by BigTable.
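
For flavor, here's a minimal single-process sketch of the batch model MapReduce embodies – purely illustrative, not Google's code: every run reprocesses the entire corpus from scratch, which is exactly the property Caffeine was built to escape.

    from collections import defaultdict

    # A toy imitation of the MapReduce model: the whole corpus is
    # reprocessed on every run, making index updates inherently batch jobs.

    def map_phase(doc_id, text):
        # Emit (word, doc_id) pairs for every word in the document.
        for word in text.lower().split():
            yield word, doc_id

    def reduce_phase(word, doc_ids):
        # Collapse all values for a key into the final index entry.
        return word, sorted(set(doc_ids))

    def build_index(corpus):
        # "Shuffle": group mapper output by key, then reduce each key.
        grouped = defaultdict(list)
        for doc_id, text in corpus.items():
            for word, did in map_phase(doc_id, text):
                grouped[word].append(did)
        return dict(reduce_phase(w, ids) for w, ids in grouped.items())

    corpus = {1: "big table stores data", 2: "file system stores chunks"}
    print(build_index(corpus)["stores"])  # [1, 2]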

In short, Caffeine expands on BigTable to create a database programming model that lets the company make changes to its web index without rebuilding the entire index from scratch.
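
Again illustrative rather than Caffeine itself: under an incremental model, ingesting one new document touches only the postings for the words it contains – there's no rerun of the batch job sketched above.

    # A toy incremental index: adding a document updates only the
    # affected postings, in the spirit of (but far simpler than) Caffeine.

    index = {}  # word -> set of doc ids

    def add_document(doc_id, text):
        # Only this document's words are touched; nothing is rebuilt.
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(doc_id)

    add_document(1, "big table stores data")
    add_document(2, "file system stores chunks")  # no full rebuild needed
    print(sorted(index["stores"]))  # [1, 2]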

Lipkovitz indicated that Colossus wasn't necessarily suited for use with Google services outside of search, but this was contradicted by an earlier interview Google had given to the Association for Computing Machinery (ACM). According to that interview, Colossus – or GFS2 – was specifically designed for low-latency applications such as Gmail and YouTube. Google's Sean Quinlan said the original GFS was unsuited to such apps, but that GFS2 was.

"There are places in the [original GFS] design where we've tried to optimize for throughput by dumping thousands of operations into a queue and then just processing through them," he said. "That leads to fine throughput, but it's not great for latency. You can easily get into situations where you might be stuck for seconds at a time in a queue just waiting to get to the head of the queue."

The original GFS also had a single point of failure. A master node oversaw data spread across a series of distributed chunkservers, and there was no master node backup. Google later added automatic failover, but the basic design still posed a problem. "While these instances – where you have to provide for failover and error recovery – may have been acceptable in the batch situation, they're definitely not OK from a latency point of view for a user-facing application," Quinlan said.
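
Here's a hedged sketch of what heartbeat-driven automatic failover looks like in general – the class, method names, and timeout are illustrative, not Google's design:

    import time

    HEARTBEAT_TIMEOUT = 10.0  # assumed: seconds of silence before failover

    class MasterFailover:
        """Toy heartbeat-driven failover; not Google's implementation."""

        def __init__(self, primary, standbys):
            self.primary = primary
            self.standbys = list(standbys)
            self.last_heartbeat = time.monotonic()

        def on_heartbeat(self):
            self.last_heartbeat = time.monotonic()

        def current_master(self):
            # Failover is automatic, but clients can still stall for up
            # to HEARTBEAT_TIMEOUT seconds -- tolerable for batch jobs,
            # not for user-facing apps, which was Quinlan's point.
            if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
                self.primary = self.standbys.pop(0)
                self.last_heartbeat = time.monotonic()
            return self.primary

    cluster = MasterFailover("master-0", ["master-1", "master-2"])
    print(cluster.current_master())  # "master-0" while heartbeats arrive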

Colossus uses distributed masters as well as distributed slaves, and its chunkservers handle many small chunks of data. This, Quinlan said, lets Google spread data across far more machines, and he explained that it should allow the company's infrastructure to expand for another ten years.
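
To see why smaller chunks spread data wider, consider a toy placement function that hashes each chunk onto a server. This is illustrative only, not Colossus's actual placement logic:

    import hashlib

    def place(chunk_id, servers):
        # Deterministically hash a chunk onto one server.
        digest = hashlib.md5(chunk_id.encode()).hexdigest()
        return servers[int(digest, 16) % len(servers)]

    servers = [f"chunkserver-{i}" for i in range(100)]

    def machines_holding(file_name, num_chunks):
        # The set of distinct machines holding some part of the file.
        return {place(f"{file_name}/{i}", servers) for i in range(num_chunks)}

    print(len(machines_holding("bigfile", 4)))     # few large chunks: ~4 machines
    print(len(machines_holding("bigfile", 1000)))  # many small chunks: ~100 machines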

But in his interview with ZDNet, Hölzle indicated that Colossus will soon give way to another platform. "I think three years from now we'll try to retire that because flash memory is coming and faster networks and faster CPUs are on the way and that will change how we want to do things," he said. "One of the nice things is that everyone today is using the BigTable compressed database. Suppose we have a better BigTable down the line that does the right thing with flash – then it's relatively easy to migrate all these applications as long as the API stays stable."
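
The point about API stability is the classic interface-versus-implementation split. A minimal sketch, with hypothetical class names rather than anything from Google's codebase:

    # If applications code against a stable API, swapping the storage
    # engine underneath -- say, for a flash-aware one -- needs no
    # application changes. Names here are illustrative.

    class Table:
        # The stable API applications are written against.
        def get(self, key): raise NotImplementedError
        def put(self, key, value): raise NotImplementedError

    class DiskTable(Table):
        def __init__(self): self._data = {}
        def get(self, key): return self._data.get(key)
        def put(self, key, value): self._data[key] = value

    class FlashTable(Table):
        # Same interface; imagine internals tuned for flash's fast random I/O.
        def __init__(self): self._data = {}
        def get(self, key): return self._data.get(key)
        def put(self, key, value): self._data[key] = value

    def application(table):
        table.put("query", "colossus")
        return table.get("query")

    # Migrating the application is a one-line change at the call site:
    print(application(DiskTable()))
    print(application(FlashTable()))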

GFS and MapReduce were the inspiration for Hadoop, the open source number-crunching platform developed under the aegis of Apache. It's now used by everyone from Twitter to Facebook to Yahoo!. A sister project, HBase, provides an open source version of BigTable for Hadoop. ®
