Sponsored
It’s less than a year since the Covid-19 pandemic forced technology leaders into crisis mode. Workforces worldwide moved out of their offices and into their homes, and their data from physical servers to the cloud. The legacy of that rushed transition - for how companies protect their most important asset, their data - is only now becoming clear.
Depending on your point of view, tech leaders are being forced to confront the gaps left by that hasty transition online and the sidelining of on-premises infrastructure and its associated disciplines. Or they have a golden opportunity to bring a level of discipline and automation to corporate data protection that was never possible with legacy platforms.
Whichever perspective you take, the need for organisations to build cyber resilience is clear. This can be thought of in two parts, says Steve Wood, director of sales engineering at Carbonite, a leading cloud backup vendor.
First, he says, businesses need to take “proactive steps to secure the core asset of their business, which is data.” At the same time, he continues, they have to consider “the reactive measures they can implement in order to get their business back up and running again.”
“It's really talking about business capabilities in terms of data protection, and data recovery… and that really does extend across the whole business,” says Wood.
A year ago, that “whole business” and its perimeter may have been easily definable. But with the changes of the last 12 months, Wood says: “We're not just talking about applications and data that sits inside a server inside a data centre. We're talking about the vast amount of data that also sits outside of the data centre, on devices that are unprotected and most at risk or susceptible to loss and damage.”
The problem is many organisations have an incomplete picture of the level of data protection they can expect from their cloud and SaaS providers. There may be an SLA associated with their cloud application or service provider, but this typically covers access to the service itself. The associated data? Well, that’s the customer’s problem.
Your data goes to the cloud, but the buck stops with you
This might seem counter-intuitive. But as Wood explains: “In the early 2000s, if you were a business running Exchange on a server in your data centre, there was no question. The CIO is the one who would be blamed if that server was breached or that data was lost.”
Fast forward to 2021, and just because you’ve moved the application to the cloud, that doesn’t mean your liability has moved off-prem as well. “You may be acquiring the service from Microsoft, but it's your data and you are ultimately responsible for it.”
SaaS platforms such as Microsoft 365 offer data protection tools, but they are quite specific - the recycle bin in Exchange Online, for example. They don’t always lend themselves to the panoply of technology and process a company needs to put in place to ensure it can withstand a punch to its data guts and get back up again.
The shift to SaaS also highlights how important - and how easily overlooked - unstructured data can be, living in all sorts of places beyond the traditional perimeter of the organisation. That could be proposals and other documents residing in email, or documents that have been generated or evolved during collaboration sessions across platforms such as Teams.
It’s also worth considering that different generations of workers may have different attitudes when it comes to managing their data. To take just one example, workers who came of age in the pre-cloud era could be in the habit of storing everything on their laptop’s C drive. This raises the question of an endpoint backup requirement - and a potential endpoint security issue.
Those whose careers began in the cloud era might be attuned to syncing and sharing docs across multiple devices, and be more comfortable dropping their work into, for example, OneDrive. Which is great, as long as they remember to do so. And remember the next day. And the one after that.
In both examples, the organisation relies on workers to do the right thing, and to keep doing it. So how do you rethink backup and data protection for the cloud? As Wood explains, there are five key pillars to building a bulletproof data protection strategy.
First, it should be automated. It shouldn’t rely on individual users remembering to move a file to a sync folder, or to kick off a backup at a given time. As Wood reminds us, “Backup is an insurance policy. You don't want to be constantly reminded that you have it…So we want it to happen in the background without anyone's interaction.”
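The idea of hands-off backup can be illustrated with a minimal sketch. This is purely illustrative - the function name is invented and a real product like Carbonite’s agent works very differently - but it shows the shape of a job a scheduler runs quietly in the background, copying only files that are new or have changed since the last run:

```python
import shutil
from pathlib import Path

def backup_changed_files(source: Path, dest: Path) -> list[Path]:
    """Copy files from source that are new, or newer than the
    backed-up copy in dest. Intended to be invoked on a schedule
    (e.g. by cron or a background agent), with no user interaction."""
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        target = dest / src_file.relative_to(source)
        # Copy only if missing, or if the source has been modified since.
        if not target.exists() or src_file.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, target)  # copy2 preserves timestamps
            copied.append(target)
    return copied
```

Because `copy2` preserves modification times, a second run over unchanged data copies nothing - which is exactly the “insurance policy” behaviour Wood describes.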
Second, the 3-2-1 backup rule - three copies of your data, on two different storage media, with at least one copy held offsite, separate from the live data - applies equally to the cloud. “You know, your backup solution should have an offsite capability by its very nature,” says Wood. However, while traditional IT had a clear idea of what constitutes offsite - tape held offline, ideally sent to a remote location - what constitutes offsite when it comes to cloud? Once you’ve made the “emotional” decision to let your data go into the cloud, why would you want to haul it back on-prem to transfer it to tape?
The answer is you shouldn’t need to, but you should have the security of knowing that your backup provider is storing your data separately from your application platform, both logically and physically. So, for example, Carbonite customers have a choice of geographically dispersed data centres to back up to.
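The 3-2-1 rule is simple enough to express as a check. The sketch below is hypothetical - the `BackupCopy` fields and `satisfies_3_2_1` name are invented for illustration - but it captures the three conditions:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str   # e.g. "primary-dc", "cloud-eu-west" (illustrative labels)
    media: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool   # held away from the live data and primary site?

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """3-2-1: at least three copies, on at least two media types,
    with at least one copy offsite."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )
```

In cloud terms, the “offsite” copy might be a geographically separate data centre rather than a tape vault - the check is the same.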
Whatever destination you choose for your data, you should also be assured it is immutable. As Wood explains, “A backup is a snapshot of a point in time, and once that backup is taken, there should be no way in which it can be manipulated or modified.”
This is also important from a data governance point of view, of course. If you have to retrieve data for regulatory or legal purposes, you want to ensure it is a high-fidelity version of the data at the time in question.
But, equally, if not more importantly, says Wood, “The bad guys are very, very smart.” Ransomware is one of the monsters that have haunted CIOs’ lockdown nightmares over the last year, and as Wood explains, malware creators are fully aware that a ransom demand is unlikely to result in a pay-out if a mark has a timely backup to restore from. So, often, ransomware will search and destroy an organisation’s backups first - if it can reach them - before getting to work on encrypting the live data.
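Truly immutable storage prevents modification outright; a complementary, widely used technique - sketched here in general terms, not as a description of any vendor’s product - is to record a cryptographic hash of each snapshot when it is taken and store it separately, so that any later tampering is at least detectable:

```python
import hashlib

def snapshot_digest(snapshot: bytes) -> str:
    """Compute a SHA-256 hash when the backup is taken;
    store the digest separately from the snapshot itself."""
    return hashlib.sha256(snapshot).hexdigest()

def snapshot_intact(snapshot: bytes, recorded_digest: str) -> bool:
    """Any modification to the snapshot after the fact
    changes its hash, so tampering fails this check."""
    return hashlib.sha256(snapshot).hexdigest() == recorded_digest
```

If ransomware manages to reach and alter a backup, a mismatch between the stored digest and the recomputed one flags the copy as untrustworthy before anyone restores from it.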
This also highlights the importance of granularity when it comes to restore. If the point of data protection is to get systems up and running again, you need backups, and tools, that allow you to find and restore the appropriate data, not just ALL your data. That could mean choosing the appropriate point in time from which to restore your data, or the simple ability to restore selected files, or environments.
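Choosing “the appropriate point in time” is, at its core, a search over snapshot timestamps: find the latest snapshot taken at or before the moment you want to recover. A minimal sketch (function and variable names invented for illustration):

```python
from bisect import bisect_right

def select_restore_point(snapshots, target_time):
    """Given snapshots as (timestamp, payload) pairs sorted by
    timestamp, return the latest snapshot taken at or before
    target_time, or None if no snapshot is old enough."""
    times = [t for t, _ in snapshots]
    i = bisect_right(times, target_time)  # first snapshot AFTER target_time
    return snapshots[i - 1] if i else None
```

A real product layers file- and environment-level selection on top of this, but the point-in-time question always reduces to a lookup like this one.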
Hand in hand with this is the final issue: flexible restore, which doesn’t assume a restore is to the same device or environment that the data was backed up from. This becomes important when it comes to restoring a worker’s data to a new or different device, whether it is a question of migrating a set of project files or because a worker has left the organisation. Or because application data needs to be restored to a new, upgraded or different application environment.
Draw your own map
That’s what the foundations look like. But what’s the starting point for putting them in place? “We would always recommend that a customer analyse where their data is,” says Wood. While much of your organisation’s data may well be in the cloud, some users will still be keeping important material on their hard drive, or in OneDrive, or selectively syncing with HQ, and your data protection strategy has to take all that into account.
Then they should consider the data within the organisation, and what application services they are using: “Maybe I’m running virtual machines in Azure, or AWS, or wherever? What data is there? Is it critical to my business? How am I protecting it?”
This includes questions as fundamental as “What happens if I forget to pay my bill on AWS and they shut down a virtual machine?” What happens when 100 workers rely on an application server that processes invoices, and it goes down?
At the other end, it’s worth considering the data implications of individual workers contending with the chaos of home working and home schooling. If an ageing Windows 7 laptop ends up full of coffee, is the data backed up, and is it trivial to restore it to a Windows 10 machine? How do you keep that worker productive in the meantime?
Once you’ve mapped all your data, then you can start protecting it, and as we’ve seen, the principles behind data protection in the new cloud-based, remote world are not substantially different from the old world. But if the answer to the question “Where is my data?” is “It’s in the cloud”, then that is where the solution belongs as well.
Sponsored by Carbonite|Webroot, an OpenText company