Inside Amazon's web services

SLAs for us but not for you, says Amazon's CTO

In 2006, Amazon.com launched several web services aimed at developers: the Simple Storage Service (S3), offering unlimited internet storage; the Simple Queue Service, for reliable message delivery; and the Elastic Compute Cloud (EC2), which lets developers create and manage virtual server instances programmatically.
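
To give a flavour of how these services are driven in practice, here is a minimal sketch (not from the article) using the AWS SDK for Python, boto3, which postdates this piece but talks to the same underlying APIs; the bucket name, queue URL and AMI ID are hypothetical.

    import boto3

    # S3: store and retrieve an object (bucket name is hypothetical)
    s3 = boto3.client("s3")
    s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"hello, world")
    print(s3.get_object(Bucket="example-bucket", Key="hello.txt")["Body"].read())

    # SQS: send and receive a message (queue URL is hypothetical)
    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"
    sqs.send_message(QueueUrl=queue_url, MessageBody="work item 1")
    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)

    # EC2: launch a virtual server instance (AMI ID is hypothetical)
    ec2 = boto3.client("ec2")
    ec2.run_instances(ImageId="ami-12345678", MinCount=1, MaxCount=1,
                      InstanceType="t2.micro")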

The services are innovative, keenly priced on a pay-as-you-go basis, and generally work as advertised. At the same time, it seems incongruous to be selling books one day and web services the next. Not according to Amazon’s CTO, Werner Vogels, who spoke to us at the London QCon last month.

“Amazon is a technology platform, not just an internet bookshop,” he says. “You might see Amazon’s secret sauce as being personalization, or recommendation, but what really makes Amazon operate is to be able to scale.” Amazon’s web services open its platform to third-parties as well as making use of spare capacity.

A barrier to adoption of Amazon’s web services is the absence of any SLA (Service Level Agreement), making some businesses reluctant to entrust data or critical services to Amazon. “They are absolutely correct,” says Vogels, with disarming frankness. “You have to understand that this is a nascent business. So we have to figure out on our side how to give these guarantees. It doesn’t make sense to guarantee things, and then not be able to meet those guarantees. It is better to explain to people that there are no guarantees at the moment, except high level statements that it is fast and reliable, instead of lying to them.”

That said, has anyone lost data? Have there been outages? “We’ve lost nobody’s data. We’ve had a few performance blips that didn’t affect everyone,” Vogels tells me. “We try to avoid that with Amazon.com also, where any outage has significant financial impact. We try to deploy the same techniques around S3 and EC2.”

In compensation for the lack of an SLA, Amazon’s services are inexpensive. For example, S3 costs $0.15 per GB per month, plus $0.20 per GB of data transferred, while EC2 server instances are $0.10 per instance per hour, for a server said to be equivalent to a 1.7GHz x86 processor with 1.75GB of RAM. There are no up-front costs. Is Amazon committed to low prices in future? “Yes, absolutely. For these things to make sense they need to be cheap,” says Vogels. “It’s the model of utility computing that will drive the development of these kinds of systems.”
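
As a rough illustration of what pay-as-you-go adds up to (prices from above, workload figures hypothetical): 50GB stored, 20GB transferred out, and one instance running around the clock for a 30-day month.

    # Hypothetical monthly bill at the quoted prices: 50GB stored,
    # 20GB transferred, one instance running 24 hours a day for 30 days.
    storage_gb, transfer_gb, instance_hours = 50, 20, 24 * 30
    monthly_cost = storage_gb * 0.15 + transfer_gb * 0.20 + instance_hours * 0.10
    print(f"${monthly_cost:.2f} per month")  # $83.50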

Amazon’s web service APIs have changed little since their first release, though Vogels hints at some future enhancements. He mentions WebDAV as an interesting area. Another common request concerns authentication and billing. “Some developers would really like us to handle their own customer authentication, and that we would bill those customers directly for their use of S3.”

Currently you have to sign up as a developer, even if you are really the end user of an S3 application. “Are there still things missing? Yes. Will Amazon address those? Maybe. But we first need to make sure that these basic services, storage, CPU, communication, and queuing, operate solidly. There’s a lot to learn still,” says Vogels.

How Amazon implements SOA

Amazon’s internal development uses a model of small teams. Each team owns a small part of the business, which they build as a service. “There is a service oriented architecture internally, but we did that before service orientation was a buzzword,” says Vogels. Teams are autonomous, which means that within reason they choose their own preferred language and tools. “We expect our engineers to be smart enough to not go off and do really crazy things,” Vogels told us. “There is still a lot of Perl, but there isn’t that much that is completely built out of Perl. Then there is C++ and Java. There is also a growing Ruby community within Amazon. Then there are the containers, whether it is JBoss or Tomcat, or homegrown stuff.

“We trust our engineers to make the right choices. They also have to manage it. They are in the full cycle. So they have to limit their technology choices. That is a very good motivator to make the right choices.”

The existence of diverse technologies internally suggests that interoperability is an issue. “We have very strict guidelines around what a service is,” says Vogels. “There is a contract between clients and servers that describes network-independent access mechanisms. But it also clearly describes SLAs. At Amazon we’re very rigid about SLAs between clients and servers. So the interoperability is through standard mechanisms, some of them are home-grown protocols, others are standard REST and web services, but the description of it is fixed, meaning that services cannot change APIs one day after another, because there are quite some dependencies internally.”
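
To make the idea of a fixed contract concrete, here is a purely illustrative sketch; the names, version string and latency figure are invented for the example and do not represent Amazon’s actual internal format.

    from abc import ABC, abstractmethod

    class OrderServiceV1(ABC):
        """Pinned contract between internal clients and servers."""
        API_VERSION = "2007-03-01"   # fixed; breaking changes require a new version
        LATENCY_SLA_MS = 300         # internal SLA target for responses

        @abstractmethod
        def get_order(self, order_id: str) -> dict: ...

        @abstractmethod
        def place_order(self, customer_id: str, items: list) -> str: ...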

It is ironic that Amazon relies on SLAs internally, but will not offer them to its customers. Nevertheless, there is a lot to like in Amazon’s affordable on-demand computing.

For more information on Amazon’s web services, see here. ®
