
AWS experiment with Lambda in cloudless configuration fails to impress

Service took a long nap in the notorious EAST-1 region

Amazon Web Services' notoriously gaffe-prone US-EAST-1 service delivered another day of disruptions on Tuesday when its local serverless Lambda service struck trouble for three and a half hours.

As detailed in an AWS status update, at 12:08 PM PDT (1908 UTC) on June 13, the cloud colossus noticed "increased error rates and latencies in the US-EAST-1 Region."

Eleven minutes later came news that "AWS Lambda function invocation is experiencing elevated error rates."

A mere seven minutes later, AWS proclaimed it had identified the problem and commenced the work needed to fix it.

But as Amazonian cloud admins labored to sort out the situation, the outfit reported "increased error rates and latencies for multiple AWS Services in the US-EAST-1 Region."

Those issues left even the AWS console unavailable to some, leading to advice that users should seek out that vital cloud management tool "using a region-specific endpoint."
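For those unfamiliar with that workaround, region-specific console endpoints typically put the region name in front of the usual console URL – something along the lines of https://us-west-2.console.aws.amazon.com/ – letting users reach the management console through a region other than the stricken one. (That example URL is our illustration, not lifted from AWS's status page.)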

Organizations including Delta Air Lines and Burger King were impacted by the incident, with the former struggling to sell tickets online and the latter's mobile app losing its sizzle.

Whatever AWS did to restore services worked fast: by 13:48 PDT (2048 UTC) the cloudy outfit reported "We are beginning to see an improvement in the Lambda function error rates."

And at the same time came a basic incident report, blaming the mess on "an issue with a subsystem responsible for capacity management for AWS Lambda, which caused errors directly for customers (including through API Gateway) and indirectly through the use by other AWS services."

At 2100 UTC AWS declared "Many AWS services are now fully recovered and marked Resolved on this event" – but it wasn't until 2242 UTC that it sounded the all-clear, stating that its services had all been restored to full operation, and all backlogs cleared.

Clouds aren't supposed to break like that. Nor are problems with a single service supposed to ripple so widely across a cloud operator's other offerings.

US-EAST-1 is AWS's oldest region, and its flakiest. We've covered outages or glitches in the Virginia-based bit barns in September 2021, December 2021, and November 2020. Analyst firm Gartner has rated the region as a weak point for the cloudy market leader.

Despite the region's poor performance, AWS has pledged to expand it, adding multiple datacenters as part of a $35 billion expansion plan. ®
