The database has grown up with the enterprise. From mainframe to desktop, and from client/server to web, the use of databases has evolved and deployments have multiplied as organisations have grown, diversified and embraced more sophisticated practices and new technology platforms.
Databases have evolved to become essential engines for running business-critical applications and workloads. They are also now recognized as key for unlocking the business value of data, among the most important assets in any business.
This journey has produced a complex database landscape. According to IDC, organisations now have, on average, 13 copies of every database and file in their possession. As organisations seek to unravel the hidden opportunities within their data, it is common to hold multiple copies of analytical and transactional production databases. This is one reason more and more organisations are turning to data marts: segments of an enterprise data warehouse partitioned by subject area, such as a specific business unit.
The picture is further complicated when you consider that more growth lies ahead. Much of the existing IT infrastructure concerns relational data and therefore fails to address the fastest-growing data sector: unstructured datasets, including web, IoT, social and mobile data. The upshot is that there are even more systems to manage and optimise. Adding to the complexity, some organisations are repatriating their data from the public cloud, bringing it back on premises for a variety of reasons including cost, data ownership and regulation. The Enterprise Cloud Index found that 73 per cent of organisations have brought workloads back on premises, with private cloud an appealing option.
Back to basics
Database estates have become fragmented - broken up into silos of systems and data running on different and proprietary hardware and software, overseen by multiple management platforms and teams. Tuning and securing has never been more difficult or time consuming, especially when you throw in hybrid cloud, where data might live on premises or on remote systems. Instituting change and efficiently managing databases in this world is both difficult and costly.
Back to the future?
Management of IT systems has always been a labour-intensive activity. Research firm IDC found that 78.9 per cent of IT spending in civilian agencies goes on improving and maintaining legacy systems. In the database context of today, that translates into activities such as running backups, applying upgrades and software patches, and troubleshooting. But the effort required has greatly expanded in the digitalised world. This means tuning and optimising database performance in real time and ensuring availability levels that far surpass those considered "normal" in the old enterprise world.
Next comes performance and scale, both of which are much harder to achieve with fragmented data estates. Legacy storage architectures often pose additional headaches. For starters, traditional storage systems typically entail a significant capital expenditure outlay. From the outset, the architecture must accommodate over-provisioning in order to plan for future growth in performance and capacity requirements. Many storage vendors have made a lot of money out of over-provisioning, but the harsh reality for the customer is that they are paying over the odds for under-utilised storage systems. Add to that the need to store vast volumes of unstructured data and one can easily see how storage costs soar out of control.
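The economics of over-provisioning are easy to sketch. A back-of-the-envelope calculation (all figures here are hypothetical, purely to illustrate the point) shows how low utilisation inflates the effective price of every terabyte actually used:

```python
# Back-of-the-envelope cost of over-provisioning (all figures hypothetical).

def cost_per_used_tb(capex: float, provisioned_tb: float, utilisation: float) -> float:
    """Effective cost per terabyte actually in use."""
    used_tb = provisioned_tb * utilisation
    return capex / used_tb

# A SAN sized up front for three years of projected growth, only 40% full today.
over_provisioned = cost_per_used_tb(capex=500_000, provisioned_tb=500, utilisation=0.40)

# The same spend on capacity grown incrementally, running at 80% utilisation.
right_sized = cost_per_used_tb(capex=500_000, provisioned_tb=500, utilisation=0.80)

print(f"over-provisioned: ${over_provisioned:,.0f} per used TB")  # $2,500
print(f"right-sized:      ${right_sized:,.0f} per used TB")       # $1,250
```

At 40 per cent utilisation, every terabyte in use effectively costs twice what it would at 80 per cent - which is the premium the customer pays for capacity bought years before it is needed.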
Furthermore, over-provisioning of a traditional storage system such as a SAN (storage area network) does not resolve the basic issue that database performance degrades as full capacity is finally reached. To get the database back up to speed, an expensive “forklift” hardware upgrade may be required. But attached to this is the risk that the upgrade can cause outages and is unable to scale as required. Finally, there’s availability and security. Nearly a third of data centres have suffered outages for purely practical and mundane reasons: power cuts, IT or software errors and so on, resulting in unplanned downtime.
It is difficult to design a failover and recovery architecture that ensures availability and performance when your data architecture is composed of all those legacy silos. Manual operations increase time and complexity, due to lack of standardisation. And they greatly increase the risk of human error.
As for cyber security, the growing complexity of the database estate makes it hugely difficult for DBAs to keep applying the database updates and patches that prevent attackers gaining a foothold.
Analyst firm Gartner predicts that through 2023 almost all security failures in the cloud will be the fault of customers. We're already seeing that play out: one in three IT pros reckon their organisation has been breached as a result of an unpatched vulnerability. It was a failure to fix a known vulnerability in a Java framework that saw the personal data of 148 million customers held by credit reporting agency Equifax stolen by attackers in one of the largest security breaches in US history.
Time to virtualise
One way of overcoming this fragmentation and complexity is to turn to a modern infrastructure architecture that combines compute, storage and networking into one integrated stack that’s designed from the ground up around virtualisation. In a phrase: hyperconverged infrastructure (HCI).
Hyperconverged systems simplify management by virtualising everything. Compute and storage resources are pooled and allocated by a software layer as needed by the application. Customers also get the benefit of scale: because hyperconverged systems integrate everything inside the box, you can start with a cluster of nodes and add more as required.
Many hyperconverged platforms can be deployed on industry-standard server hardware of the customer’s choosing, removing the need for dedicated or proprietary servers.
HCI simplifies and automates many management tasks, enabling admins to control all of their databases through the same software layer. Virtualisation also allows you to pack a greater number of databases into each physical system, giving you much more bang for your buck. Hyperconverged infrastructure is proving popular. According to IDC, use of hyperconverged systems grew 23.7 per cent year on year in 2019, generating $1.8bn worth of sales. This amounted to 46.6 per cent of the overall converged systems market.
What, then, are the kinds of features you should seek out when bringing your enterprise database to a virtualised and hyperconverged infrastructure in the hybrid cloud? In a nutshell, you should look at features that serve in the core areas of performance, scalability and management.
A dedicated database server is generally optimised to produce high levels of transactions per minute (TPM) and I/O operations per second (IOPS). In a three-tiered infrastructure that centralises I/O from multiple systems into a storage array, the array controller’s I/O capacity can become a bottleneck to performance. With hyperconverged infrastructure, however, the storage is distributed among all the nodes in a cluster which means that, as the number of nodes increases, so does the total I/O capacity of the whole infrastructure, helping to accelerate database performance.
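The contrast between a centralised array and scale-out storage can be put in rough numbers. In the sketch below the per-node and controller figures are illustrative, not vendor specifications; the point is the shape of the curve, not the absolute values:

```python
# Sketch: aggregate I/O capacity of a centralised storage array vs a scale-out
# HCI cluster. The IOPS figures are illustrative, not vendor specifications.

ARRAY_CONTROLLER_IOPS = 200_000   # fixed ceiling, however many servers attach
IOPS_PER_HCI_NODE = 50_000        # each node contributes its own local storage I/O

def cluster_iops(nodes: int) -> int:
    """Total I/O capacity grows with every node added to the cluster."""
    return nodes * IOPS_PER_HCI_NODE

for n in (4, 8, 16):
    print(f"{n:>2} nodes: {cluster_iops(n):,} IOPS "
          f"(array ceiling stays at {ARRAY_CONTROLLER_IOPS:,})")
```

The array controller is a fixed bottleneck shared by everything behind it; in the distributed model, doubling the node count doubles the aggregate I/O capacity available to the databases.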
Benchmarks demonstrate that HCI delivers more than enough IOPS to satisfy the performance requirements of databases. Moreover, HCI vastly simplifies and accelerates database deployment, as well as the creation of database copies.
For databases run on hyperconverged systems, availability and scalability are largely built into the platform. The infrastructure can self-heal and recover from failures. Non-disruptive upgrades, patching, and data resilience also mean less downtime. Leading HCI solutions also extensively leverage automation, which further increases uptime. And HCI has transformed scalability. Need more memory or more processor power? Just add it with the flick of a switch. Want more capacity? Simple - just add another node to the cluster, which brings more storage capacity and more compute power at a stroke, and your database is ready to run heavy-duty workloads in minutes.
Hyperconverged infrastructure does not eradicate downtime, but it can reduce it and thereby minimise interruptions to the business. Servers supporting databases, for example, don't need to come offline to update software or add CPU or storage. And if the node hosting a database needs patching, the rest of the cluster can take over to prevent any outage during the reboot.
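The rolling-upgrade pattern described above can be sketched in a few lines. This is a toy outline of the orchestration logic only - the function and node names are illustrative, and the migrate/patch steps stand in for the platform's own live migration and update machinery:

```python
# Sketch of a rolling patch cycle: workloads are moved off each node before it
# reboots, so the databases stay available throughout. Names are illustrative.

def rolling_patch(nodes, migrate, patch):
    """Patch one node at a time while the rest of the cluster carries the load."""
    for node in nodes:
        migrate(node)   # live-migrate VMs/databases to the remaining nodes
        patch(node)     # update and reboot with nothing running on the node
    return nodes

patched = []
rolling_patch(
    nodes=["node-a", "node-b", "node-c"],
    migrate=lambda n: None,             # placeholder for live migration
    patch=lambda n: patched.append(n),  # placeholder for the actual update
)
print(patched)  # nodes are patched one at a time, never all at once
```

Because only one node is ever out of service, the cluster's redundancy absorbs the temporary loss and the applications see no outage.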
Meet the management
Manual management has dominated traditional infrastructure, but it doesn't scale for cloud. Hyperconverged infrastructure and cloud let you run many versions of databases and applications and concentrate on your data, letting you see beyond the silos. However, as we've seen, organisations are already struggling under the weight of their existing databases - keeping up with version and security updates alone will be beyond most human DBAs.
A centralised and automated system of management lets you get a grip on this new infrastructure. Hyperconverged platforms should include the kinds of management systems that automate the application of software patches and security updates. Best of breed HCI features one-click deployment that supports not just the rollout of updates but formerly time-consuming, manual activities such as provisioning and cloning.
Many organisations are turning to Database-as-a-Service (DBaaS) for the same reasons they are turning to HCI: they want cloud-like agility, flexibility, and ease of operations. The operational simplicity of HCI dovetails with that of DBaaS, radically streamlining database operations through automated provisioning, patching, updating, replication, and other daily database management functions. DBaaS relieves database administrators (DBAs) from spending days or weeks maintaining and repairing legacy database platforms that aren't built for new data demands. Simplified one-click management operations consolidate disparate and siloed databases for easier management, scalability, faster data access, integrated backup and DR, and increased security across the entire estate.
The most advanced HCI solutions can seamlessly integrate DBaaS as another layer of the stack. For example, integration with HCI APIs enables capabilities such as copy data management, which simplifies database provisioning and lifecycle management of data. HCI APIs also allow zero-byte cloning and snapshots, which copy an existing database at any point in time without consuming additional storage. This capability not only saves huge amounts of time otherwise wasted on cloning, but also spares expensive tier-1 storage from being filled with database copies.
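The idea behind zero-byte cloning is copy-on-write: a clone initially shares every block with its parent, and only blocks written after the clone point consume new space. A toy illustration of the principle (this is a conceptual sketch, not any vendor's implementation):

```python
# Toy illustration of copy-on-write cloning: a clone shares its parent's data
# blocks at creation time, and only diverging writes consume new space.

class Database:
    def __init__(self, blocks=None):
        # block_id -> data; a shallow copy shares the values until overwritten
        self.blocks = dict(blocks or {})

    def clone(self):
        # Copies references, not data: effectively instant and "zero-byte"
        return Database(self.blocks)

    def write(self, block_id, data):
        self.blocks[block_id] = data  # only now is new space consumed

prod = Database({0: b"customers", 1: b"orders"})
dev = prod.clone()            # instant: both blocks shared with production
dev.write(1, b"orders-test")  # copy-on-write: one new block for the clone

print(prod.blocks[1])  # production data is untouched
print(dev.blocks[1])   # the clone has diverged on a single block
```

A full-size development copy therefore costs only the blocks it changes, which is why snapshot-based clones are practical to hand out freely for test and dev.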
HCI and DBaaS also help enterprises deliver continuous innovation. Automation and standardization through a database catalogue eliminates the human error associated with manual operations and prolonged troubleshooting of databases. DBAs and infrastructure teams can spend the majority of their time working on projects that benefit the lines of business, transforming IT into a proactive business partner.
Hyperconverged infrastructure has proved its value across a range of enterprise workloads, including the most complex and demanding enterprise scenarios. Independent benchmarks confirm that HCI matches and often exceeds the performance, stability, availability and efficiency of traditional infrastructures. Moreover, by virtue of being open and software-defined, HCI busts through complexity with an approach that's centralised and highly automated.
By removing infrastructure silos and providing a centralised control plane for all database operations, HCI, especially when combined with Database-as-a-Service, greatly improves the efficiency of managing the database estate - regardless of the database engine.
Isn’t it time to bring this model to the database world and to tear down the silos?
Sponsored by Nutanix