Early next year, Yahoo! intends to open source its internal "cloud serving" platform, described as something halfway between Amazon's Elastic Compute Cloud and Google's App Engine.
Known simply as "Cloud" within the company, the platform is that piece of Yahoo! infrastructure that serves up its online applications. In short, it provides the company's internal developers with on-demand access to computing resources. But rather than offering raw virtual machines as Amazon EC2 does, it spins up "containers" of server power that are pre-configured for things like load-balancing and security. That way, developers needn't handle the load-balancing on their own.
Google App Engine also handles this sort of nitty-gritty on behalf of the developer, but it goes much further. It hides even more of the underlying infrastructure, and it puts tight restrictions on application design so that apps conform to that infrastructure. It restricts the languages you can use. It limits the libraries you can choose from. It even prevents you from making system requests that take more than 30 seconds or return more than 10MB of data.
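Limits like these are hard failures rather than suggestions, so developers end up designing around them. As a rough illustration of what a response-size cap means in practice, here is a hypothetical client-side guard in Python — the function name, exception, and structure are our own sketch, not App Engine's actual API:

```python
import io

MAX_RESPONSE_BYTES = 10 * 1024 * 1024  # a 10MB cap, in the spirit of App Engine's limit

class ResponseTooLarge(Exception):
    pass

def read_capped(stream, limit=MAX_RESPONSE_BYTES, chunk_size=64 * 1024):
    # Read at most `limit` bytes from a file-like object, failing
    # fast instead of buffering an oversized response in memory.
    buf = bytearray()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            return bytes(buf)
        buf.extend(chunk)
        if len(buf) > limit:
            raise ResponseTooLarge(f"response exceeded {limit} bytes")

# A small response passes through untouched.
small = read_capped(io.BytesIO(b"x" * 100), limit=1024)
print(len(small))  # -> 100
```

A platform that enforces this for you, as App Engine does, removes the check from application code entirely — which is exactly the trade-off between convenience and control the article describes.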
With its "Cloud," Yahoo! abstracts some of the infrastructure, but it also lets you develop with all those standard LAMP stack tools you're used to. "We don't bless the language," Yahoo! chief architect Raymie Stata tells The Reg. "We bless the container."
The company says its current plan is to open source the platform early in 2011. And eventually, it will open source all its back-end platforms.
The company already uses the open source Hadoop for distributed number crunching - this is used to build its search webmap, among many other tasks - and last June, it released its very own Hadoop distro. Then, in November, it released its Traffic Server, which handles edge caching, edge processing, and load balancing, while also managing traffic on the company's storage and server-virtualization services.
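Hadoop's "number crunching" boils down to two phases: mappers emit key/value pairs, and reducers aggregate the sorted pairs per key. A minimal word-count sketch in Python gives the flavor - this is a local stand-in run over a toy corpus, not Yahoo!'s webmap jobs, which are far more involved:

```python
from itertools import groupby

def mapper(lines):
    # Emit (word, 1) for every word; with Hadoop Streaming these
    # would be tab-separated "word\t1" lines written to stdout.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    # Hadoop sorts mapper output by key between the two phases;
    # groupby then sums the counts for each distinct word.
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

# Simulate a two-phase job over a toy corpus.
corpus = ["the quick brown fox", "the lazy dog"]
counts = dict(reducer(mapper(corpus)))
print(counts["the"])  # -> 2
```

The appeal is that the framework handles distribution, sorting, and fault tolerance; the developer writes only the two functions above.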
At some point, it will also open source its storage platform and its data pipeline.
All of which makes Yahoo! quite different from Google, which likes to keep its custom-built back-end platforms to itself. That said, Google has published papers describing its GFS distributed file system and MapReduce distributed number cruncher, and these became the basis for Hadoop. Since then, however, the company has developed a new file system known at least informally as GFS2, and this will eventually be rolled out as part of the company's "Caffeine" search infrastructure.
Amazon's EC2 is also closed, but the open source Eucalyptus project has used its APIs to mimic its setup for those looking to operate their own internal clouds. It's bundled with Ubuntu Server, and it's the basis for the federal government's new Nebula cloud, under construction at NASA. ®