Your organization needs digital transformation. But you’ll need to transform your storage infrastructure first

Not just a growth strategy. It’s also a survival strategy

Sponsored If you’re part of an organization with any sort of eye on the future, digital transformation is already part of your colleagues’ lives. It is certainly part of your competitors’. But is it part of your approach to storage?

Let’s be clear what we mean by digital transformation. At its most basic, digital transformation describes a change from analogue ways of doing business to digital processes. Uber, for example, replaces the cab line at the airport with a virtual line of taxis – and customers – in the cloud. Amazon continually replaces elements of the manual pack-and-ship process in its highly automated, distributed warehouses, while exploiting customer data and analytics to refine its recommendations. In both cases manual bottlenecks are not just eased, but eliminated altogether.

Mirroring this, various technology functions within organizations are undergoing their own forms of digital transformation.

Your counterparts in development will be talking about DevOps or Agile or Continuous Delivery. Whichever banner they have gathered under, you can assume they are intent on breaking down traditional monolithic applications into microservices and containerized apps, which can be deployed at pace. If they haven’t put real-time analytics, machine learning and AI into production, you should be wondering why.

Cloud native will also be part of the conversation. The precise definition might be up for debate, but you can assume your developers expect to be able to deploy and run scalable applications, and to access the resources to do so, including storage, quickly and easily, whether on-premises, in a private cloud, or in the public cloud. Or clouds. All of this is probably accompanied by a cultural drive to break down traditional boundaries within the technology organization and beyond, all with the aim of delivering to the customer, faster.

Your organization’s data infrastructure is clearly crucial to all of this. Machine learning and AI both require and create vast amounts of data, while analytics relies on a steady diet of data, much of it historical. And that’s before we consider edge devices, embedded systems, and industrial systems, all of which are potentially throwing data at your infrastructure. And that data must be moved somewhere so it can be turned into insight or knowledge.

So for digital transformation to even be possible, your storage infrastructure has to be up to the task.

This is much more than a question of raw capacity and speed. Digital transformation also has implications for storage management and policies, deployment and provisioning. New ways of developing and deploying software mean new expectations from the rest of the organization, which will shape how you procure and deploy storage in the years ahead. Perhaps it’s easier to contemplate a number of key questions. First, if it hasn’t already, the cloud is likely to become part of your infrastructure. When it does, how will you support data mobility? Then, how are you going to support data mobility when multiple clouds become part of your infrastructure, as they surely will? And lastly, how do you maintain control, confidence and peace of mind around the security and protection of your data, and your customers’ data, as all of this takes place?

Is your data infrastructure geared up to meet these demands? Does it allow data to flow easily from where it is generated to where it is needed?

Or does an accumulation of storage bottlenecks threaten to choke off your organization’s digital transformation effort?

Some of the bottlenecks will be obvious. Older disk-based devices will clearly not turn in the same performance as newer flash-based arrays and appliances. Depending on the architectures at your disposal, adding additional capacity or types of storage might be a challenge. You might have invested in converged systems but find they are not flexible enough when it comes to changing organizational needs. Your focus this year might be on providing block-based infrastructure for databases, but if microservices become more important to your organization next year, you may need to provide more object-based storage. Other bottlenecks are less obvious, because they involve people and processes.

Data movement, whether for tiering purposes or data protection, might be theoretically straightforward across the range of platforms and architectures you have, but may in practice be more involved, and more fiddly, than you’d like.

You might aspire to seamlessly manage in-house storage and tap the cloud as needed, but find yourself getting into the nuts and bolts of APIs and doing migrations manually. And data doesn’t exist purely in and of itself: context is also important. If you’re switching from one environment to another to manage data, you need the associated metadata to be available wherever the data lives. Similarly, you need an easy way to create and enforce policy across your core storage and remote locations, particularly if they span a variety of vendors.

Beyond the data center, your developers or data scientists need to be able to access the resources they need quickly and easily, rather than waiting for an admin to act once a request has worked through the queue. You might not even know there’s a bottleneck, because your most demanding users have already turned to the cloud.

That’s all assuming the storage you need is on hand. What you have on tap today may be the result of a purchasing decision made by someone else three years ago, based on some highly speculative forecasts of what might be needed in the future.

In the meantime, both storage technology and the challenges facing you may have changed, whether it’s a sudden need to analyze vast amounts of genomic data or to support virtual desktop infrastructure and enforce data security for a newly remote workforce. When all these factors are considered, it’s clear that the ability to manage your storage through a single platform is essential if your infrastructure is going to keep pace with the demands of developers, data analysts and the rest of the business.

What do you want? And what do you need?

Part of the answer is what could broadly be described as hyperconverged infrastructure, in which the monolithic architectures of the past are replaced by smaller, more flexible storage building blocks. This gives you more choice when it comes to the type of storage you have on tap.

But to really take advantage of this, you will need to embrace software-defined infrastructure. This involves abstracting the management and provisioning of storage services away from the underlying hardware, via a hypervisor or virtualization layer. So, the intelligence governing the infrastructure is largely in software. The hardware beneath this management layer can then theoretically be anything, anywhere, including the cloud. Management of all that potentially limitless resource, though, remains in one place, behind one interface.
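
In code terms, the pattern looks something like the sketch below: a single management plane talking to interchangeable backends through one interface. This is a minimal illustration of the idea only; every class and method name in it is hypothetical, not any particular vendor’s API.

from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Anything that can hold data: an on-prem array, a cloud bucket, and so on."""
    @abstractmethod
    def provision(self, size_gb: int, kind: str) -> str:
        """Create a volume and return its identifier."""

class OnPremFlashArray(StorageBackend):
    def provision(self, size_gb: int, kind: str) -> str:
        # In a real system this would call the array's own management API
        return f"array-vol-{kind}-{size_gb}gb"

class PublicCloudPool(StorageBackend):
    def provision(self, size_gb: int, kind: str) -> str:
        # ...and this would call a cloud provider's storage API
        return f"cloud-vol-{kind}-{size_gb}gb"

class ManagementPlane:
    """The single interface: callers never touch the hardware directly."""
    def __init__(self):
        self.backends = {}

    def register(self, name: str, backend: StorageBackend) -> None:
        self.backends[name] = backend

    def provision(self, backend_name: str, size_gb: int, kind: str = "block") -> str:
        return self.backends[backend_name].provision(size_gb, kind)

plane = ManagementPlane()
plane.register("datacenter-1", OnPremFlashArray())
plane.register("cloud-east", PublicCloudPool())
print(plane.provision("cloud-east", 500, kind="object"))

The point of the pattern is that the last few lines would look exactly the same whatever hardware, or cloud, sits behind each name.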

That’s the theory anyway, but as you contemplate the potential benefits, they should also become a checklist of exactly what your chosen provider should offer.

First off, the hardware equation should become much simpler. Let’s assume your existing and future hardware options are compatible with, and ideally certified for, your chosen management layer. Depending on the system you choose, you no longer have to worry about physical controllers, because they have been replaced by virtual controllers. Configuring or upgrading these becomes a rolling software update, rather than a nail-biting job squeezed into a small window when you can afford to take the entire system down.

Deployment of additional storage capacity should be as straightforward as possible. If you’re acquiring an NVMe all-flash array, that’s simply another resource you can make available. If you decide that future storage acquisitions will come in the shape of cloud storage, fine: that deployment should be transparent to the storage administrator, and the infrastructure should be invisible to end users.

As well as consolidating the multiple panes of glass previously needed to manage disparate storage systems, you should be able to amalgamate the multiple processes and policies running beneath them. For example, instead of setting separate snapshotting or replication policies for each hardware stack or site, your single management layer should allow you to set and automate common data protection policies right across your infrastructure, on-prem, remote, or in the cloud.
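
To make that concrete, a policy defined once and pushed to every location might look something like the following sketch. Again, the names and fields here are hypothetical, standing in for whatever your chosen platform actually exposes.

from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    snapshot_every_hours: int   # how often to take a snapshot
    retain_snapshots: int       # how many snapshots to keep
    replicate_to: str           # a remote site or cloud region

def apply_policy(policy, targets):
    # One loop replaces a separate configuration exercise per array or site
    for target in targets:
        print(f"{target}: snapshot every {policy.snapshot_every_hours}h, "
              f"keep {policy.retain_snapshots}, replicate to {policy.replicate_to}")

gold = ProtectionPolicy(snapshot_every_hours=4, retain_snapshots=42,
                        replicate_to="cloud-west")
apply_policy(gold, ["datacenter-1", "branch-office-7", "cloud-east"])

Change the policy object once, and every site, from the core data center to the smallest branch office, picks up the new settings.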

A set menu for self-service

Likewise, you should expect to eliminate the problem of software upgrades and patch management across different software and hardware stacks, across different geographies. When it comes to troubleshooting, you’re no longer looking for problems that have fallen through the cracks between different systems. Provisioning should become much more straightforward. Wherever and whatever the storage resource is, you can expect a single, automated process for deploying it, all from that single interface.

You can consider whether to offer this as a self-service option. Developers who need resources quickly will no longer have to put in a request, wait for their place in the queue, and deal with an admin, before realizing that actually, what they really need is about 50 per cent more capacity, and it needs to be block rather than file storage. Their workflow will do just that, flow, while infrastructure can automatically scale up and down according to application needs, whether in dev or production.
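
From the developer’s side, self-service could be as simple as the hypothetical call sketched below: one request, no ticket queue, and a second request the moment the first guess turns out to be wrong.

def request_storage(size_gb, kind="file"):
    """Provision storage immediately and return its details.
    A stand-in for whatever self-service API your platform offers."""
    # Behind the scenes the management layer picks a backend, carves out
    # capacity, and applies the standard data protection policies
    return {"id": f"vol-{kind}-{size_gb}", "size_gb": size_gb, "kind": kind}

vol = request_storage(200)
# Realized you actually need 50 per cent more, and block rather than file?
# Ask again; there is no queue to rejoin
vol = request_storage(300, kind="block")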

These are all elements that should make your team’s job easier on a day-to-day basis. What you do with that extra time is between you and them.

On a more strategic level, once you’ve made the right decision on your core storage management platform, future investments in infrastructure should become much more predictable. Or, looked at another way, infrastructure investments can be made with the unpredictability of modern organizations in mind. You’re not necessarily locked into that three-year guess-and-plan cycle for on-prem infrastructure which starts depreciating before you’re even using it, leaving you having to retool, or renegotiate licenses, as your needs change.

The point of this transformation is to make the infrastructure deliver and support the agility that your business needs, to match the new workflows and development paradigms, and to respond to customer and workforce demands.

That might mean the ability to deliver a system that can match resources to developers’ needs as they move to continuous delivery, or to be able to scale up resources in response to an expected spike in customers around holiday promotions, then scale back when things quiet down in January.

But it can also mean the ability to cope with the fact that your workforce has become remote overnight, and that, whether or not they return to their offices in time, you need to provide them with the services they need now. So, it’s not just a growth strategy. It’s also a survival strategy. That is, if your storage infrastructure allows you to do this.

Sponsored by Nutanix
