Microsoft this week announced general availability of its Azure SQL Data Sync tool, which allows data to be synchronised between cloudy Azure SQL databases and on-premises servers.
Just in time for Azure to go for a good long lie down, in Northern Europe at least.
The technology, which has been in preview for a while, allows administrators to configure bidirectional or unidirectional synchronisation for their databases, theoretically allowing a copy of the data in each Azure region or locally.
Microsoft reckons that pointing applications at a local copy of the database will significantly improve access times and responsiveness, while reducing latency and connection failures.
Something users of the Northern Europe Azure data centres would doubtless have appreciated last night.
So far so good. However, having dispensed with industry buzzwords such as "hybridisation", the methodology used is not new. It looks to be more suited to databases that, frankly, don't get changed an awful lot.
The basis of the technology is a central hub database, which must be in Azure, and a bunch of member SQL databases, which could also be hosted in Azure or lurk on premises.
An administrator configures those databases as a Sync Group, specifying the direction of data flow between member and hub (either unidirectional or bidirectional). The databases are then sprayed with Insert, Update and Delete triggers, which dump data changes into a tracking table that eventually finds its way to the Hub to be downloaded to the other members.
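The trigger-and-tracking-table pattern can be sketched in miniature. This is a hedged Python simulation, not Microsoft's actual tracking schema: the class, table names and log format are invented for illustration, with the `_track` method standing in for the generated Insert/Update/Delete triggers.

```python
# Illustrative simulation of Data Sync's trigger-based change tracking.
# Names and structure are assumptions, not the real generated schema.

class TrackedTable:
    """A table whose writes fire 'triggers' that log changes."""

    def __init__(self, name):
        self.name = name
        self.rows = {}          # primary key -> row value
        self.change_log = []    # stands in for the generated tracking table

    def _track(self, op, key):
        # Plays the role of the Insert/Update/Delete triggers:
        # every write is recorded for later shipment to the hub.
        self.change_log.append((op, key, self.rows.get(key)))

    def insert(self, key, value):
        self.rows[key] = value
        self._track("INSERT", key)

    def update(self, key, value):
        self.rows[key] = value
        self._track("UPDATE", key)

    def delete(self, key):
        del self.rows[key]
        self._track("DELETE", key)


def sync_to_hub(member, hub_rows):
    """Replay the member's logged changes into the hub, then clear the log."""
    for op, key, value in member.change_log:
        if op == "DELETE":
            hub_rows.pop(key, None)
        else:
            hub_rows[key] = value
    member.change_log.clear()


orders = TrackedTable("orders")
orders.insert(1, "widget")
orders.update(1, "widgets x2")
orders.insert(2, "sprocket")
orders.delete(2)

hub = {}
sync_to_hub(orders, hub)
print(hub)  # {1: 'widgets x2'}
```

The same mechanism runs in reverse for bidirectional groups, with the hub's accumulated changes downloaded to each member on its next sync.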
SQL Server greybeards will be stroking their facial hair thoughtfully at the familiarity of the process.
Conflicts are handled by either a Hub-Wins method (the Hub will overwrite data in the member) or vice versa with Member-Wins. In the latter scenario, with multiple members, the final value depends on which member syncs first.
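The two policies amount to a single rule applied at sync time. The sketch below is illustrative only — the `resolve` function and row values are invented, not Data Sync's API:

```python
# Hedged sketch of the two conflict policies described above.
# 'resolve' and the values are hypothetical, for illustration only.

def resolve(hub_value, member_value, policy):
    """Return the value that survives a sync conflict under each policy."""
    if policy == "hub-wins":
        return hub_value       # hub overwrites the member's change
    if policy == "member-wins":
        return member_value    # member overwrites the hub's copy
    raise ValueError(f"unknown policy: {policy}")


hub_copy = "price=10"      # value currently held by the hub
member_copy = "price=12"   # conflicting change made on a member

print(resolve(hub_copy, member_copy, "hub-wins"))     # price=10
print(resolve(hub_copy, member_copy, "member-wins"))  # price=12
```

Note there is no timestamp or last-writer comparison in either policy, which is why sync order, rather than who changed the data most recently, decides the outcome.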
The limitations are also wide and varied. Data types such as TimeStamp are not supported and encrypted columns could present a challenge. And you can forget all about maintaining transactional consistency (although Microsoft "guarantees that all changes are made eventually and that Data Sync does not cause data loss"). FileStream is also an absolute no-no.
Administrators also need to consider the impact on database performance of all those extra triggers as well as the potential cost of data flying in and out of Azure.
To be fair, Microsoft is clear that users should not use this technology for disaster recovery or scaling up Azure workloads, nor is it intended to replace the Azure Database Migration Service, which shifts on-premises SQL to Redmond's cloud. The software maker sees it filling a niche for customers who want an up-to-date copy of their data for reporting and analytics purposes.
Administrators reeling from this morning's outage will be taking a good hard look at how their solutions have been architected, and while the process may be a little archaic and limited, more support for distributed data will be very welcome indeed. ®