The world received an unpleasant reminder of what it's like to live without the cloud on Thursday, after Amazon Web Services' Simple Storage Service fluttered for an hour or so.
The incident invoked memories of the S3 outage in March 2017 that caused interruptions to plenty of web services and apps, sparking much rending of garments and gnashing of teeth as the fact of AWS being fallible worked its way into the minds of the faithful.
The US-EAST-1 region that caused so much trouble in March was again the culprit. At 11:58 AM on Thursday AWS reported “increased error rates” on S3. By 12:20 the company admitted: “We can confirm that some customers are receiving throttling errors accessing S3.” By 12:38 the problem had been identified and a fix commenced, error rates fell by 12:49, and at 1:05 AWS sounded the all-clear, confident that errors had ceased nine minutes previously.
AWS CodeCommit, Elastic Beanstalk and Storage Gateway all wobbled, too, and all in the same North Virginia data centre.
While this incident was nowhere near as bad as March's ApocalypS3, users were predictably grumpy about this latest S3izure:
I'm glad we banged our head against a wall for an hour, until our issue was magically resolved. Turns out S3 was down. Great job, Amazon.— Dave Hitchings (@davehitchings) September 14, 2017
AWS hasn't revealed the cause of the problem. If it does, we'll update this story. For now, another tweet on the incident looks like good advice.
Looks like S3 is down and it's a problem in N. Virginia. My 2 cents: Never deploy in us-east-1. Reliability is much better IMO elsewhere.— Ashwin Shankar (@shankspeaks) September 14, 2017
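That advice amounts to a one-line configuration change. As a minimal sketch (the region name here is purely illustrative), AWS SDKs and the AWS CLI honour the standard `AWS_DEFAULT_REGION` environment variable, so steering tooling away from us-east-1 can be as simple as:

```python
# Minimal sketch: pin AWS tooling to a region other than us-east-1.
# AWS SDKs and the CLI read AWS_DEFAULT_REGION when no region is set
# explicitly; "eu-west-1" is an illustrative choice, not a recommendation.
import os

os.environ["AWS_DEFAULT_REGION"] = "eu-west-1"

# Any SDK client or CLI call spawned from this process now defaults to
# the chosen region instead of us-east-1.
print(os.environ["AWS_DEFAULT_REGION"])
```

Of course, truly riding out a regional outage takes more than a default region: critical workloads would also want cross-region replication of the data itself.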