Backup's easy; people have been doing it for years. All you need is a tape drive and software. On the other hand: backup is terrible; it takes too long and tapes fail.
Really though, how hard can it be to do this seemingly simple thing: protect your data against hardware failures and file deletion/corruption?
The ideal backup technology provides you with instant access to any piece of data after a file has been deleted or corrupted, or after a server, disk drive or storage array crashes. Of course, this assumes you have a limitless budget.
The technically simplest approach - imaging, mirroring all the disk drives to a second array - is not ideal, since if file corruption happens the corrupted data is copied to the second array, over-writing the previously valid data. What is needed is the ability to roll back a file or database to a known good point in time, and to do so quickly.
Two concepts are involved here which are widespread throughout the data protection world: recovery point objective (RPO) and recovery time objective (RTO). The ideal RPO is the point in time just before a data loss or corruption event. The ideal RTO is as close to zero as possible.
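The relationship between the two is simple enough to show in a few lines of Python. This is purely an illustration of the terms; the timestamps and names are invented for the example, not taken from any product:

```python
from datetime import datetime

# Invented timestamps for illustration - not any product's output.
last_good_copy   = datetime(2012, 6, 1, 11, 55)  # most recent usable copy
failure_time     = datetime(2012, 6, 1, 12, 0)   # corruption strikes
service_restored = datetime(2012, 6, 1, 12, 30)  # users get access back

rpo_achieved = failure_time - last_good_copy    # data lost: 5 minutes
rto_achieved = service_restored - failure_time  # downtime: 30 minutes

print(f"Data lost (RPO achieved): {rpo_achieved}")
print(f"Downtime (RTO achieved):  {rto_achieved}")
```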
Data recovery is best achieved if all writes to disk are captured and stored in a sequential fashion. This is the goal of Continuous Data Protection (CDP) products.
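In miniature, a CDP write journal works something like the following sketch. It is a toy in Python, not how any vendor actually implements it, but it shows why capturing writes sequentially makes roll-back possible:

```python
import time

class WriteJournal:
    """Toy continuous-data-protection journal: every write is captured
    in sequence, so any past state can be rebuilt by replay."""
    def __init__(self):
        self.log = []  # sequential record of (timestamp, name, data)

    def write(self, name, data):
        self.log.append((time.time(), name, data))

    def state_at(self, when):
        """Replay the journal up to 'when' - the roll-back operation."""
        files = {}
        for ts, name, data in self.log:
            if ts > when:
                break
            files[name] = data
        return files

journal = WriteJournal()
journal.write("budget.xls", b"good figures")
known_good = time.time()            # the recovery point we will want
time.sleep(0.01)                    # keep the toy timestamps distinct
journal.write("budget.xls", b"corrupted rubbish")

# Mirroring alone would have copied the corruption to the second array;
# the journal can still hand back the file as it stood at known_good.
print(journal.state_at(known_good))  # {'budget.xls': b'good figures'}
```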
Examples are the FalconStor CDP product and Microsoft's System Center Data Protection Manager. The better products are application-aware and can roll back applications, such as SharePoint, to previous points in time in an application-consistent manner.
You can use traditional backup software for data recovery in these circumstances, but it won't generally offer continuous data protection and you may lose any data written since the last backup run.
Recovery from server and storage array loss is different. For servers, the typical arrangement is a failover setup using products from suppliers such as Neverfail and Double-Take.
This involves copying all storage writes on one server to a second one, in a form of continuous data protection. This kind of software monitors the primary server's state via a so-called heartbeat monitor and, if the primary server fails, switches all access to that server, physical or virtual, and its applications over to the secondary server, again physical or virtual.
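The heartbeat idea itself is straightforward. Here is a hypothetical sketch of the monitoring loop in Python - the names and thresholds are invented, and real products such as Neverfail and Double-Take do considerably more than this:

```python
import itertools

# Hypothetical heartbeat monitor - invented names and thresholds, not
# any vendor's actual mechanism.
HEARTBEAT_INTERVAL = 1.0   # seconds the primary has to check in
MISSED_BEATS_LIMIT = 3     # tolerate brief hiccups before failing over

def monitor(receive_heartbeat, promote_secondary):
    """receive_heartbeat(timeout) returns True if the primary checked
    in within the timeout; promote_secondary() redirects all access."""
    missed = 0
    while True:
        if receive_heartbeat(timeout=HEARTBEAT_INTERVAL):
            missed = 0                # primary is alive; reset the count
        else:
            missed += 1
            if missed >= MISSED_BEATS_LIMIT:
                promote_secondary()   # switch access to the secondary
                return

# Simulation: two healthy beats, then the primary goes silent.
beats = itertools.chain([True, True], itertools.repeat(False))
monitor(lambda timeout: next(beats),
        lambda: print("primary lost - promoting secondary server"))
```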
To protect against a storage array failure, you can arrange to have a snapshot taken of the array's contents and replicated to a second array. Such replication can be synchronous, meaning fast and expensive, or asynchronous, meaning slower and cheaper but exposing you to some data loss if the array crashes after fresh data has been written to its drives but before the last asynchronous replication has completed.
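The trade-off between the two modes can be sketched in a few lines of Python. Again this is a toy, with an in-memory dictionary standing in for each array, not any vendor's replication engine:

```python
import queue

def synchronous_write(local_disk, send_to_remote, name, data):
    """Only acknowledge once the second array confirms its copy:
    zero data loss, but every write pays the inter-array round trip."""
    local_disk[name] = data
    send_to_remote(name, data)   # blocks until the remote array acks
    return "ack"

def asynchronous_write(local_disk, backlog, name, data):
    """Acknowledge immediately and ship the copy later: faster and
    cheaper, but anything still queued is lost if the array dies now."""
    local_disk[name] = data
    backlog.put((name, data))    # drained later by a background task
    return "ack"

local, remote, backlog = {}, {}, queue.Queue()
synchronous_write(local, lambda n, d: remote.update({n: d}), "blk0", b"x")
asynchronous_write(local, backlog, "blk1", b"y")

# At this instant blk0 is on both arrays; blk1 exists only locally and
# in the queue - a crash now loses it, the asynchronous exposure.
```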
For the bulk of business data, whether in enterprises or small and medium businesses, the general protection routine is to use backup software to back up data to disk, from where it can be recovered at disk speed rather than slower tape speed.
You can combine continuous data protection and server/array crash protection with a single product such as InMage Scout software, available from HDS as well as through the InMage channel.
As businesses migrate some or all of their data to the cloud, they will be less involved in the details of backup, focusing instead on service level agreements defining RPO and RTO details for their various classes of data. When that happens, generations of backup procedures, products and skills will become as obsolete as paper tape and punched cards, and an awfully large number of people, while being a little sad, will breathe a huge sigh of relief. ®