You need to shift millions of repos to AWS without any downtime. How? Bitbucket engineering chief tells all

And you need to do it during a pandemic while working from home

How does one rebuild an aeroplane, at 40,000 feet, while it is still full of passengers? That was the question posed by Daniel Tao, head of engineering at Bitbucket, while discussing the source shack's move to AWS data centres.

The outfit found itself having to migrate 50 million repositories out of parent company Atlassian's bit barns and shunt them, along with the billion or so daily interactions they attract, into Amazon's cloud without falling into a heap. Oh, and it had to sort it out during a global health pandemic.

Bitbucket Cloud has been around for over 10 years and, along with an on-premises version, is Atlassian's take on source wrangling. It was augmented back in April with Open DevOps, built around Jira, Confluence and Opsgenie as well as Bitbucket, but, unlike the rest of the Atlassian line, had remained firmly in its own data centres.

The service was bedevilled by outages in 2019 and storage woes in 2018. A change was due. And not just the yanking of support for Mercurial in 2020.

"Its architecture has always assumed it would be in a data centre," said Tao. "And so we had to really redo key aspects of Bitbucket's architecture, and rebuild it kind of in a new way in a cloud environment, while still operating our data centres."

Rival DevOps outfit GitLab memorably shunted its own data to Google's cloud in 2018, shortly after GitHub's acquisition by Microsoft.

As for Bitbucket's migration, it took 18 months from start to finish, with the final push happening over three hours at the end of August. The pandemic complicated things as the team faced up to "the same curveballs that were thrown at every tech company of just learning how to work remotely and coming up with new processes and rituals around that," according to Tao.

It also had to ensure sufficient capacity had been purchased and account for any last-minute issues in its plans.

From a technical standpoint, the greatest challenge was one of bandwidth as customer data was first replicated from Atlassian's servers to AWS's. "Every time a customer pushed to their Git repository, every time a customer left a comment on a pull request, and so forth, all that data was being replicated in real time with a lag of milliseconds into our new environment in AWS," said Tao.
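For the curious, the shape of such a pipeline is simple enough to sketch. The Python below is a toy illustration of the dual-environment replication Tao describes, not Bitbucket's actual code; every store, event format, and name in it is invented. Writes land in the data-centre store first, and a background consumer applies each change to the AWS-side copy moments later:

```python
import queue
import threading
import time

# Toy model of live replication between two environments. Everything here
# (stores, event format, names) is hypothetical; it only illustrates the
# write-then-replicate flow described in the article.

change_feed: queue.Queue = queue.Queue()
dc_store = {}    # stand-in for the data-centre source of truth
aws_store = {}   # stand-in for the replica being built up in AWS

def handle_customer_write(key: str, value: str) -> None:
    """Serve the write from the data centre, then queue a change event."""
    dc_store[key] = value
    change_feed.put({"key": key, "value": value, "ts": time.monotonic()})

def replicator() -> None:
    """Tail the change feed and apply each event to the AWS copy."""
    while True:
        event = change_feed.get()
        if event is None:          # shutdown sentinel
            return
        aws_store[event["key"]] = event["value"]
        lag_ms = (time.monotonic() - event["ts"]) * 1000
        print(f"replicated {event['key']!r} ({lag_ms:.2f} ms behind)")

worker = threading.Thread(target=replicator)
worker.start()

handle_customer_write("repo/42:refs/heads/main", "commit abc123")
handle_customer_write("pr/7:comment", "Looks good to me")
change_feed.put(None)
worker.join()
assert aws_store == dc_store   # replica has caught up
```

In production the feed would be something far sturdier, and Git pushes move packfiles rather than key-value pairs, but the millisecond lag Tao quotes is the same quantity the toy measures.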

The downtime at the end of August was then only required to point Bitbucket's services at this new "source of truth."
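That final flip can be conceptually tiny. Again as a hypothetical sketch (the flag and stores below are invented for illustration), once the replica is fully caught up, repointing the source of truth amounts to changing one setting that every service consults:

```python
# Hypothetical cutover sketch: the "source of truth" is a single setting
# consulted on every request. Once the AWS replica has caught up, flipping
# it during the maintenance window repoints reads and writes.

dc_store = {"repo/42": "commit abc123"}
aws_store = dict(dc_store)   # fully caught-up replica

source_of_truth = "datacentre"

def active_store() -> dict:
    """Return whichever store is currently authoritative."""
    return aws_store if source_of_truth == "aws" else dc_store

assert active_store() is dc_store
source_of_truth = "aws"      # the three-hour maintenance window, in one line
assert active_store() is aws_store
```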

"The vast majority of our customers wouldn't have noticed anything," he claimed.

Which is all well and good from a technical standpoint, but some customers might be less than pleased at having their data moved to AWS.

"In terms of data locality," said Robert Krohn, head of Agile and DevOps engineering at Atlassian. "Our data centres are in the same sort of region, so we didn't have to inform our customers that we were moving stuff around."

That might raise an eyebrow or two, although Krohn added: "Some of the big customers we did talk to… but in general, that wasn't a constraint."

Although a leap into the land of Bezos might trigger an ulcer or two in some developers, if one has already signed up for Atlassian's cloud wares, the odds are that a chunk of one's data is already in AWS's data centres. Atlassian's internal Platform-as-a-Service, Micros, runs atop AWS and hosts the majority of the company's cloud products. This includes (after a few tweaks) Bitbucket.

"It was the final boss," remarked Tao.

And despite the move to AWS, the throat to choke if things go wrong is still Atlassian's.

Unsurprisingly, Tao and Krohn were keen to highlight the improvements in performance and scalability after migration. "Behind the scenes," said Tao, "the amount of incidents has dropped to zero." A glance at the company's status page shows just one problem since August's final switchover: an issue around pipelines that cropped up earlier this week.

However, there are still those pesky data centres, now redundant and stuffed full of customer data, to deal with. "We have to take the data that is on those hard drives very seriously," said Krohn, "and we're going through a process of securely destroying them in a certified way."

But what of the remaining hardware? "We've had parties where we've sent people and said, oh, go and cut the cables," said Krohn.

Because, no matter how carefully you plan your migration and whatever precautions you take, let's face it: there ain't no party like a DC-decommissioning party. ®
