Azure promises to keep your backups safe and snug for up to 10 years
Auto-failover for Azure SQL Database when things wobble
Microsoft continued its drive to encourage SQL Server customers to move their precious data to its cloudy towers with the announcement that long-term retention and automatic failover had finally hit the big time.
Long Term Retention
The preview, announced back in October 2016, was designed to extend the retention period of Azure SQL Database backups from 35 days to up to 10 years. Back then, customers had to provision their own Azure Recovery Services vault to hold the backups. As of April this year, Microsoft switched to Azure Blob storage, most likely after someone realised that having to juggle multiple storage types would be a bit of a faff for users.
Azure SQL Database already creates full and differential backups automatically, but depending on the service tier these can vanish in as little as a week, meaning users must come up with some way of either getting them off the cloud or shunting them into some other storage.
Enter Long Term Retention (LTR), which went to General Availability this week and can fling the existing backups into Blob storage for a user-defined amount of time on a weekly, monthly or annual basis. Almost like someone spent two years writing a PowerShell script to, er, copy files from one place to another.
Of course, Microsoft would say it's a bit more complicated than that.
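That said, an LTR policy itself boils down to a handful of retention settings. A minimal sketch using the Az PowerShell module might look like this (all resource names are placeholders; an authenticated session via Connect-AzAccount is assumed):

```powershell
# Hypothetical resource names - substitute your own.
# Keep weekly backups for 12 weeks, monthly backups for 12 months,
# and the backup taken in the first week of each year for 10 years.
Set-AzSqlDatabaseBackupLongTermRetentionPolicy `
    -ResourceGroupName "my-rg" `
    -ServerName "my-sql-server" `
    -DatabaseName "my-db" `
    -WeeklyRetention  "P12W" `
    -MonthlyRetention "P12M" `
    -YearlyRetention  "P10Y" `
    -WeekOfYear 1
```

The retention values are ISO 8601 durations, so the 10-year ceiling mentioned above corresponds to "P10Y"; tiers that aren't set simply aren't retained.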
Businesses are increasingly subject to retention regulations, with the UK's tax collectors, for example, insisting that business records relating to VAT need to be kept for a minimum of six years.
The 10 years given by Microsoft may not be sufficient. The UK's Medicines and Healthcare products Regulatory Agency (MHRA) insists that trial data be kept for at least 15 years.
And as with any long-term data storage solution, assuming regulators can be persuaded that leaving data in Microsoft's cloud is safer than a fireproof safe, consideration must also be given to maintaining a system able to restore that data. With support life cycles clocking in at around 10 years (SQL Server 2012, for example, runs out of support in 2022), just keeping hold of the backups is only part of the challenge users face.
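Getting data back out is its own step: restoring an LTR backup means standing up a new database from it. A rough sketch, again with placeholder names and assuming the Az module:

```powershell
# Find the most recent LTR backup for a database and restore it
# into a new database on the same server. Names are placeholders.
$backup = Get-AzSqlDatabaseLongTermRetentionBackup `
    -Location "westeurope" `
    -ServerName "my-sql-server" `
    -DatabaseName "my-db" |
    Sort-Object BackupTime -Descending | Select-Object -First 1

Restore-AzSqlDatabase -FromLongTermRetentionBackup `
    -ResourceId $backup.ResourceId `
    -ResourceGroupName "my-rg" `
    -ServerName "my-sql-server" `
    -TargetDatabaseName "my-db-restored"
```

The restored copy lands as a separate database, so whether anything still understands the schema a decade on is, as noted, the user's problem.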
Failing with style
Replicating data over geographic regions has long been a handy function of cloud-based databases, allowing for reporting from secondary databases and a feeling of security that if the cloud drops out of the sky (as it memorably did in Ireland earlier this year), users could failover to an unaffected region.
Unfortunately, for Azure at least, triggering this failover is usually anything but automatic, requiring tinkering to bring things back up. While Microsoft has had geo-replication since 2014, transparent, automatic failover has been conspicuous by its absence, outside of preview.
The Amazon Relational Database Service (RDS), which supports SQL Server instances, already features a High Availability (HA) solution in the form of Multi-AZ deployments, which automatically provisions a synchronous standby replica in a different Availability Zone. In the event of an unplanned outage, RDS will switch to the standby replica within a minute or two, although existing app connections will need to be re-established.
Finally making it to full General Availability this week (despite a 2017 blog post announcing it prematurely), Azure's new tooling gives customers manual or automatic failover for a group of databases.
Microsoft reckons the failover process will be transparent to users; connection endpoints shouldn't change and fiddling with SQL connection strings won't be required. Users worried about data loss can specify a grace period to give Azure time to finish data synchronisation or mitigate whatever is causing the outage.
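In PowerShell terms, pairing two servers with automatic failover and a one-hour grace period might be sketched as follows (server, group and database names are placeholders):

```powershell
# Create a failover group between a primary server and a partner in
# another region, with automatic failover after a one-hour grace
# period, then add a database to the group. Names are placeholders.
New-AzSqlDatabaseFailoverGroup `
    -ResourceGroupName "my-rg" `
    -ServerName "primary-server" `
    -PartnerServerName "secondary-server" `
    -FailoverGroupName "my-fog" `
    -FailoverPolicy Automatic `
    -GracePeriodWithDataLossHours 1

Get-AzSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "primary-server" -DatabaseName "my-db" |
    Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "my-rg" `
        -ServerName "primary-server" -FailoverGroupName "my-fog"
```

Applications then connect to the group's listener endpoint (here, my-fog.database.windows.net) rather than to a specific server, which is why connection strings survive a failover unchanged.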
Existing geo-replication users will be delighted to know the auto-failover database group function is free. ®