If a college graduate can’t protect your data, you’re in trouble
To achieve resiliency and efficiency, aim for simplicity
Sponsored Feature Most organizations recognize their ability to operate, or simply survive, depends on their ability to protect their data from threats or catastrophes – and to recover it when the worst does come to pass.
That's a straightforward imperative. Yet it is incredibly complex to achieve, however much data is being stored. And for many, we're talking formidable amounts of information, with the average organization having 4.6PB of data stored on prem and 4.7PB stored in the cloud, according to the Enterprise Strategy Group (ESG).
The threat landscape we face today is equally formidable. ESG's recent research report – Cloud Data Protection Strategies at a Crossroads – found cybersecurity events to be the key cause of "recovery efforts", cited by 54 percent of organizations. By contrast, system failures were identified by 44 percent, with malicious deletion and accidental deletion mentioned by 37 percent and 35 percent respectively.
Ransomware or cyberattacks are the biggest data protection concern amongst organizations, cited by 29 percent, while 17 percent highlighted insider threats as their biggest concern.
It's a truism that most, if not all, organizations are likely to suffer some form of data breach at some point. So, it's important for technology and security leaders to think about both data protection and resiliency in terms of their ability to limit the impact and speed recovery when a breach does occur.
But while companies have a clear view of the threats, their view of their data and the measures they have in place to protect that data, and restore it when necessary, is not always so clear. ESG's research shows that 58 percent of organizations are not just using "the cloud" but are using three or more clouds. Moreover, these are supporting a multiplicity of uses and workloads, ranging from "traditional" infrastructure such as bare metal servers or VMs, to modern cloud native and Kubernetes-based infrastructure.
So it's perhaps no surprise that 61 percent of respondents said cloud data protection had a "moderate time impact" on their daily operations, while 21 percent said it had a major time impact.
Even with all that effort, only 11 percent said they had been able to recover all their data in the wake of each cloud recovery event they had faced. And significantly, the proportion of on-prem only respondents who'd fully succeeded in recovering all their data was the same.
It's about more than protection
These issues present tech leaders with a challenge when it comes to delivering the resiliency they need as efficiently as possible. Three quarters of organizations have three or more full-time staffers, or equivalent, devoted simply to protecting data in the cloud.
As Dell senior product marketing consultant Colm Keegan explains, that's a major commitment of expertise in a tech environment that continues to be plagued by skill shortages.
What's more, 70 percent of organizations reported having to use different data protection tools to protect different cloud environments. This will inevitably impose an even heavier cognitive load on the staffers tasked with juggling those multiple environments.
It seems logical that when data protection teams are forced to juggle multiple environments, tools, and interfaces, blind spots are more likely to creep into their view of their data assets overall. And these could present gaps through which attackers work their way in.
So, faced with such a complex internal and external landscape, how should tech leaders be rethinking their data strategy?
Data protection is the starting point for resilience. Storage admins need to be able to ensure that they have a protected "copy of last resort" of their data. As Keegan puts it, "You're going to take a subset of your most critical data, and put as many moats around that as you can."
This includes ensuring immutability, so that the data can't be deleted or altered once it's been written. The implication when it comes to ransomware is clear. With Dell's offerings, admins can take the next step, says Keegan, and "can deploy a protected copy into an isolated digital vault, that's completely separate from their production network."
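The immutability idea can be illustrated with a minimal sketch (a hypothetical write-once store, not Dell's implementation): once an object is written, it can be read and verified, but never overwritten or deleted.

```python
import hashlib

class WormStore:
    """Minimal write-once-read-many (WORM) store: objects can be
    written and read, but never overwritten or deleted."""

    def __init__(self):
        self._objects = {}  # name -> (data, sha256 digest at write time)

    def write(self, name, data: bytes):
        if name in self._objects:
            raise PermissionError(f"{name!r} is immutable and already exists")
        self._objects[name] = (data, hashlib.sha256(data).hexdigest())

    def read(self, name) -> bytes:
        data, digest = self._objects[name]
        # Verify the stored copy still matches its original digest.
        if hashlib.sha256(data).hexdigest() != digest:
            raise IOError(f"{name!r} failed its integrity check")
        return data

    def delete(self, name):
        raise PermissionError("deletion is disabled on an immutable store")

store = WormStore()
store.write("backup-2024-06-01", b"critical data")
try:
    store.write("backup-2024-06-01", b"attacker overwrite")
except PermissionError as err:
    print("blocked:", err)
```

A real system enforces this in the storage layer rather than in application code, but the contract is the same: the write path is one-way, so a ransomware process that reaches the copy cannot encrypt it in place.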
But companies should also consider insider threats. If an insider is able to bypass the processes ensuring immutability, says Keegan, "Then all bets are off." So access control becomes crucial, to ensure one malicious – or incompetent – employee can't undermine the whole system. "The analogy is it takes two keys to launch the missiles," says Keegan. In other words, no single person should hold the proverbial keys to the kingdom from a data perspective.
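The "two keys" pattern Keegan describes is commonly called dual control or the two-person rule. A minimal sketch (illustrative only) shows the shape of it: destructive operations queue up approvals and only execute once two distinct operators have signed off.

```python
class DualControlVault:
    """Destructive operations require sign-off from two distinct
    operators -- the "two keys to launch the missiles" pattern."""

    def __init__(self, required_approvals=2):
        self.required = required_approvals
        self._approvals = {}  # operation id -> set of approver names

    def approve(self, op_id, operator):
        # A set means the same operator approving twice counts once.
        self._approvals.setdefault(op_id, set()).add(operator)

    def execute(self, op_id):
        approvers = self._approvals.get(op_id, set())
        if len(approvers) < self.required:
            raise PermissionError(
                f"{op_id}: only {len(approvers)} of "
                f"{self.required} required approvals")
        return f"{op_id} executed, approved by {sorted(approvers)}"

vault = DualControlVault()
vault.approve("purge-old-backups", "alice")
try:
    vault.execute("purge-old-backups")  # one key is not enough
except PermissionError as err:
    print("blocked:", err)
vault.approve("purge-old-backups", "bob")
print(vault.execute("purge-old-backups"))
```

The point of the set-based approval tracking is exactly the insider-threat case above: one compromised account can request or approve, but never both halves of a destructive change.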
This will address the core task of data protection. But ensuring efficiency and resiliency requires a broader view. Keegan adds that admins often don't have a clear line of sight to every piece of infrastructure that's out there.
Thanks to DevOps practices, developers will spin up infrastructure to deliver a project without, perhaps, paying too much attention to the impact on the organization's broader data protection policy.
Keegan says that Dell's own field CTOs will ask admins, "Are your developers using public cloud? And the answer usually is 'Oh, I'm sure they are, but I have no idea what they're doing'."
Don't get in the way
Remedying this requires data protection solutions that can seamlessly handle traditional workloads – bare metal servers and virtual machines – as well as modern cloud native workloads. This means the ability to support and protect Kubernetes containers and cloud IaaS, PaaS and SaaS workloads.
But technology leaders also need to consider "efficiency" in this context – ie, not unduly impeding development and deployment of new applications and workloads.
Developers are going to do what developers are going to do, says Keegan. What data protection teams can do is understand developers' motivations and processes. From there, they can offer simple self-service capabilities so developers can deploy the infrastructure they need, while also putting in place policies and tooling to protect their development data – which Keegan points out will often be sensitive data in its own right.
"Developers are looking for things like open API's that they can embed in their code that calls up infrastructure in the cloud, to automatically assign a protection policy whenever a resource gets deployed."
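The pattern Keegan describes can be sketched in a few lines. This is a hypothetical self-service wrapper – the function names and the `provision`/`protect` callables stand in for a provider's real API calls, and are not a Dell or cloud-vendor interface:

```python
def deploy_with_protection(provision, protect, spec,
                           policy="default-backup"):
    """Provision a cloud resource, then immediately attach a data
    protection policy, so nothing ships unprotected. `provision`
    and `protect` stand in for real infrastructure API calls."""
    resource_id = provision(spec)
    protect(resource_id, policy)
    return resource_id

# Stubbed provider calls for illustration:
protected = {}
rid = deploy_with_protection(
    provision=lambda spec: f"vm-{spec['name']}",
    protect=lambda resource_id, policy: protected.update(
        {resource_id: policy}),
    spec={"name": "ci-runner"},
)
print(rid, "->", protected[rid])  # vm-ci-runner -> default-backup
```

Because the protection call lives inside the deployment path rather than in a separate process, a developer who uses the wrapper cannot create an unprotected resource by accident – which is the self-service outcome the data protection team is after.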
If a task is simple, it's more likely to get done. And if data protection should be simple for developers and other staffers to incorporate, it should be just as simple for storage admins too.
As Keegan points out, the broader skills shortage is a given. "That's why it's so important for things to be simple, because you need those skilled folks focused on the things that really, really matter – like helping the business innovate."
Most data protection management should be able to be handled by a junior admin, he says, with senior, experienced personnel only getting involved when there's a problem. A real problem. The specialized skills gap, too, remains a major challenge for many organizations. So while freeing up resources for other work is a nice-to-have, having resources that can manage data security is a need-to-have.
Efficiency is crucial when it comes to the people side of the equation. But leaders should also consider efficiency from a technology point of view. There's the data storage infrastructure footprint to consider, of course, but compute, power, and cooling also come into play. The smaller the storage footprint, the smaller the cost of all of these too. And there's efficiency when it comes to recovery as well. So the right choices – a vendor's capabilities around deduplication, for example – all play a part.
Ultimately the choice of partner(s) becomes crucial, not just because of the core technology, but because of their ability – and willingness – to step up when something like a disaster really does strike. And it may be that opting for a range of point solutions increases the challenge here as well.
"Who's going to be able to help you, end to end?" says Keegan. "So when you do start running into a snag, you're not hanging up the phone with one vendor, picking up the phone to another, and trying to arbitrate between the two."
So, from raw disk space to the number of tools their administrators are using, the compute cycles it takes to back up and restore data, and the number of outside parties to deal with, storage leaders should be considering how to keep their storage footprint as lean as possible even as they cover ever more platforms and workloads.
Ultimately as Keegan explains, "Data doesn't shrink, it only grows, and consumption, in the cloud, can start to really get significant over time." Given the existing amount of data companies already have under management, and the spread of locations it exists in, that sounds like a daunting challenge.
So, for technology leaders, ensuring their organization's long-term future means thinking not just about cybersecurity but about data protection and resilience. That will likely mean using fewer, but better, more tightly integrated tools to manage their data protection processes and their recovery procedures. Sourcing that tooling and expertise from a single vendor will make life simpler still, particularly when they face a real-life recovery situation.
Those organizations that embed simplicity and efficiency in their data protection strategy will be best placed to unleash innovation across the broader organization, and simultaneously navigate an increasingly hostile threat landscape.
Sponsored by Dell.