Open source API dreams of The Meta Cloud

Calling the cloud of clouds


The Meta Cloud is one step closer to meta-reality.

Last week, at OSCON, a San Jose startup known as Cloudkick unveiled an open source project that hopes to provide a single programming interface for a host of so-called infrastructure clouds, including Amazon EC2, Rackspace Cloud Servers, Slicehost, and GoGrid. Dubbed libcloud, the project reaches for a world where developers can build an app that's easily shuttled from one cloud to another.

You might call it The Meta Cloud API.

Cloudkick already offers its own RightScale-like management tool for overseeing the use of Amazon EC2-like infrastructure clouds - i.e. web services that provide on-demand access to scalable compute resources. And with this management tool, you can juggle multiple clouds from the same web dashboard. But with libcloud, the company has expanded on the cloud-of-clouds idea by providing a common API for such services.

"libcloud is useful for anyone who wants to write some sort of software that works between clouds," Cloudkick's Alex Polvi tells The Reg. "If you wanted to, say, develop tools that automatically move your loads to the cheapest provider, there could potentially be a libcloud implementation that does that."

Emphasis on potentially. At the moment, you can use a single API call to list server instances across Amazon EC2 and EC2 Europe, Rackspace Cloud Servers, Slicehost, VPS.net, and GoGrid. And another call lets you reboot servers across both EC2 and EC2 Europe. But that's the extent of it.
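The common-driver pattern behind those cross-cloud calls can be illustrated with a toy example. The class and method names here are illustrative stand-ins, not libcloud's actual API - each provider implements the same small interface, so calling code never needs to know which cloud it is talking to:

```python
# Toy sketch of a common driver interface across clouds.
# These fake drivers stand in for real per-provider implementations.
class Node:
    def __init__(self, node_id, name, provider):
        self.id = node_id
        self.name = name
        self.provider = provider

class FakeEC2Driver:
    provider = "ec2"
    def list_nodes(self):
        # A real driver would call EC2's DescribeInstances API here.
        return [Node("i-123", "web-1", self.provider)]
    def reboot_node(self, node):
        return True

class FakeSlicehostDriver:
    provider = "slicehost"
    def list_nodes(self):
        # A real driver would hit Slicehost's REST API instead.
        return [Node("4567", "db-1", self.provider)]
    def reboot_node(self, node):
        return True

drivers = [FakeEC2Driver(), FakeSlicehostDriver()]

# One loop lists servers across every cloud - the cross-provider
# "list instances" call described above.
all_nodes = [n for d in drivers for n in d.list_nodes()]
for node in all_nodes:
    print(node.provider, node.id, node.name)
```

Because every driver exposes the same methods, rebooting a node is the same call regardless of which cloud it lives on.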

The ultimate goal is to create an API that handles just about everything across these disparate clouds - and others, including Flexiscale and the open source private cloud platform Eucalyptus.

OSCON also saw Rackspace open source its own Cloud Servers APIs, with the hope of fostering an industry standard for infrastructure clouds. But for the foreseeable future, as Amazon continues to resist such efforts, we're stuck with incompatible interfaces ripe for a client library along the lines of libcloud.

Written entirely in Python, the project is hosted on GitHub. ®
