Cloud backups: Where's my get out of jail card?
Quis custodiet ipsos custodes?
Double or quits
This layered approach is important. If your data doesn't exist in two locations then it doesn't exist. Just as it isn't good enough to simply trust my data to two different Amazon "zones", putting a Unitrends appliance in my datacentre and using that as the single point of backup isn't good enough either.
A fire, a flood or some jerk driving a truck into the side of the building could take out the working set of data and the backups stored on the local appliance.
Backing up to a local appliance and then mirroring those backups to another location – one you own, one run by a managed service provider you work with, or a public cloud provider – is pretty much the only way to go.
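For the sake of illustration, here is a minimal sketch of that second hop: a local backup directory being mirrored to an off-site host with rsync. The paths and host name are placeholders I have invented, and any half-decent backup appliance will have its own replication feature that does this job properly.

```python
#!/usr/bin/env python3
"""Minimal sketch of a two-tier backup: local copy first, then an off-site mirror.

Assumes rsync is installed and that the off-site host (offsite.example.com, a
placeholder) accepts SSH from this machine. Paths are placeholders too.
"""
import subprocess
import sys
from datetime import date

LOCAL_BACKUP_DIR = "/srv/backups"                            # where the local backup job writes
OFFSITE_TARGET = "backup@offsite.example.com:/srv/mirror"    # the second location

def mirror_offsite() -> None:
    """Push the local backup set to the off-site target."""
    result = subprocess.run(
        ["rsync", "-az", "--delete", f"{LOCAL_BACKUP_DIR}/", OFFSITE_TARGET],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # A failed mirror means the data only exists in one place again.
        sys.exit(f"{date.today()}: off-site mirror failed: {result.stderr.strip()}")
    print(f"{date.today()}: off-site mirror completed")

if __name__ == "__main__":
    mirror_offsite()
```

Kick it off from cron once the local backup job finishes; the only point being made is that a copy of the data leaves the building.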
This is easy to understand in the context of workloads running on your own infrastructure, but how do you back up public cloud data?
And isn't avoiding exactly this sort of tomfoolery supposed to be the big selling point of the public cloud in the first place?
The truth is that most of the online providers are really quite crap at the sorts of versioned backups discussed above. Salesforce is a great example.
Salesforce uses tape backup and ensures that your data is backed up on average once a day. That sounds reasonable until you get to the part where restoring that data from backup is a minimum of $10,000.
When I consider the above in light of the high cost of Salesforce, my brain simply kernel panics and reboots. Salesforce stops looking like the phenomenal deal everyone keeps trying to convince me it is, and starts looking like a very typical buck-passer in new as-a-service clothing.
Salesforce's recommended solution to this problem – and you will get the same sort of answer all across the industry – is to purchase backups-as-a-service (BaaS) from a partner company. So to use that software-as-a-service (SaaS) application in anything like a safe-enough-for-business-use fashion you will incur additional costs to back it up.
To kick you while you are down, those costs probably use a different costing model from the application or service you are trying to back up.
The backup service is probably per gigabyte of data with a trickle charge for bandwidth and separate charges if you ever need to actually recover something. Explaining the costing model to the bean counters suddenly takes more than one slide in the presentation.
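To see why it no longer fits on one slide, here is a back-of-the-envelope version of that bill. Every rate below is an assumption pulled out of the air for illustration, not any vendor's actual price list.

```python
# Back-of-the-envelope SaaS backup bill. All rates here are illustrative
# assumptions, not real pricing from any provider.
PROTECTED_GB = 500          # data held in the SaaS application
RATE_PER_GB_MONTH = 0.30    # storage charge, $/GB/month
EGRESS_PER_GB = 0.09        # the "trickle charge" for bandwidth, $/GB
MONTHLY_CHANGE_GB = 50      # data transferred for incremental backups each month
RESTORE_FEE = 250.00        # flat charge if you ever need to recover anything
RESTORE_GB = 20             # size of a hypothetical restore

monthly_storage = PROTECTED_GB * RATE_PER_GB_MONTH
monthly_bandwidth = MONTHLY_CHANGE_GB * EGRESS_PER_GB
restore_cost = RESTORE_FEE + RESTORE_GB * EGRESS_PER_GB

print(f"Storage:     ${monthly_storage:,.2f}/month")    # $150.00/month
print(f"Bandwidth:   ${monthly_bandwidth:,.2f}/month")  # $4.50/month
print(f"One restore: ${restore_cost:,.2f}")             # $251.80
```

Three separate meters, none of which maps onto the per-seat subscription you are already paying for the application itself.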
Picking on Salesforce is easy, but it is an almost universal problem. I don't exactly have versioned access to my Gmail or Office 365 email. For that you need to turn to Live Office (now owned by Symantec) or similar applications. (I used Spanning with Gmail before I started my migration off US cloud services.)
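If you would rather not pay a third party at all, the bare-bones DIY alternative is pulling your own mailbox copies over IMAP. The sketch below assumes IMAP access is enabled on the account and that the host and credentials shown are stand-ins; it simply dumps each message to disk, which is a long way from proper versioned backup, but it shows the shape of the job you are outsourcing.

```python
"""Bare-bones mailbox dump over IMAP -- a sketch, not a backup product.

Assumes IMAP is enabled on the account; the host, username and app password
below are placeholders to be replaced with real values.
"""
import imaplib
import pathlib

IMAP_HOST = "imap.example.com"   # your provider's IMAP endpoint
USERNAME = "you@example.com"
APP_PASSWORD = "replace-me"      # an app-specific password, never hard-coded in practice
DEST = pathlib.Path("mail-backup")

def dump_inbox() -> None:
    DEST.mkdir(exist_ok=True)
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(USERNAME, APP_PASSWORD)
        imap.select("INBOX", readonly=True)
        _, data = imap.search(None, "ALL")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            raw = msg_data[0][1]
            # One file per message; real tooling would deduplicate and version.
            (DEST / f"{num.decode()}.eml").write_bytes(raw)

if __name__ == "__main__":
    dump_inbox()
```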
This whole model is dangerous. IT people trained in the dark arts of the tinfoil hat look at any network and start pulling it apart for single points of failure.
When you go to a barber you expect to pay the tithe and get your hair cut; you don't have to bring in a third-party hair removal service to sweep the floor afterwards.
Technologists are good at technology but have a tendency to simply say that problems outside their core expertise are not their problem. Whether the issue they are trying to pass the buck on is security or backups, this leads to trust issues. Over time, after enough people have lost data, confidence in the entire concept of the public cloud could well erode.