A server apocalypse can come in different shapes and sizes. Be prepared

Plan Bs, from corrupted data to pandemics

I run into the same misconceptions about business continuity on an almost daily basis. “We’ve already got backups, so why would we need to have a disaster recovery site as well?” comes up with alarming regularity, as does: “We spent tens of thousands on a disaster recovery site, so why did we have that four-minute outage – why didn’t we switch over to DR and get our money’s worth?”

It’s nobody’s fault, really. For all the efforts of technical folk to explain and educate, and the efforts of the business to listen and learn, it’s easy to see why it can get confusing and ultimately lost in translation.

A lot of the terminology is the same (or at least, very similar) and the same words might mean different things, depending on who says them and in what context.

Backups

The fundamental principle of the backup is straightforward. A backup is a copy of a fixed point in time, to which you can return in the event of an issue.

An application or system has a snapshot of its current state written out to storage, and that is retained for use in case of emergency. That might be a copy of an application or files, or it could be something more complex like a set of databases or an entire server, but the key facts remain: you need somewhere to store it, and you need somewhere to restore it.

Whether you’re writing backups regularly on a schedule, or taking them manually prior to making changes, you’ll want to keep them somewhere safe. For short-term use that’s probably disk, but for long-term retention that might be tape or other media.
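As a minimal sketch of the scheduled case – assuming a Python environment and a simple file-tree source, rather than any particular backup product – a job run from a scheduler might write timestamped, compressed snapshots like this:

```python
import tarfile
from datetime import datetime, timezone
from pathlib import Path

def take_backup(source: str, dest_dir: str) -> Path:
    """Write a timestamped, compressed snapshot of `source` into `dest_dir`.

    Each run produces a new archive, so older fixed points in time
    are retained rather than overwritten.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)

    # UTC timestamp in the filename identifies the point in time this copy represents
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = dest / f"backup-{stamp}.tar.gz"

    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=Path(source).name)
    return archive
```

In practice `dest_dir` would live on separate media from `source` – a snapshot stored on the same disk as the data it protects fails with it.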

Then something happens, and that means your backups are required. Perhaps data gets corrupted or lost, and you need to wind the clock back. Maybe a server has died, and you need to perform a bare-metal restore.

Whatever the case, you’ll be restoring your backups onto a server or system, and it is for this reason that backups alone simply aren’t enough as a method of business continuity. Restoring also takes time, and lots of it.

You can have the most rigorous backups in the world, but if you don’t have somewhere to restore them (i.e. a disaster recovery site) then you may as well not have them at all.

Redundancy

Redundancy is a relatively straightforward concept. Servers have redundant power supplies, to ensure constant delivery in the event of a failure. Further up the chain, you’ll want to be feeding your redundant power supplies from diverse power feeds in case one fails, as another layer of redundancy.

Your power will be protected by uninterruptible power supplies and generators, and – you guessed it – you’ll want to have more than one of each of these to ensure continued power delivery in the event of a failure.

You’ve probably heard of components like generators being referred to as having N+1 or 2N coverage, and these are measures of redundancy. In an N+1 scenario, for your core set of components (N) you would always maintain one more in case of failure: if you have one generator, then you must always keep a second as a spare; if your core number of generators increases to three, then you must always have a fourth available.

In 2N, you would always have the same number of components again: with one generator you would still have just one spare; but with three core generators you would have three spares for redundancy, giving a total of six generators.
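The arithmetic behind the two schemes can be sketched in a few lines – a hypothetical helper for illustration, not anything from a real capacity-planning tool:

```python
def total_units(n: int, scheme: str) -> int:
    """Total components to provision for `n` core units under a redundancy scheme."""
    if scheme == "N+1":
        return n + 1   # one spare, regardless of how large the core fleet grows
    if scheme == "2N":
        return 2 * n   # a complete duplicate set: one spare per core unit
    raise ValueError(f"unknown scheme: {scheme!r}")
```

So one core generator needs two units under either scheme, but at three core generators the schemes diverge: N+1 calls for four in total, while 2N calls for six.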

It’s worth mentioning that redundancy is not always an instantaneous process. In some instances (such as server power supplies or UPS systems) the redundant components will already be operating at all times, and the failover capacity is maintained within the total headroom of the operating hardware.

In the case of generators, your redundant devices will likely not be running and may take a little time to start up – it is for this reason that generators have warm oil pumped through their system even when switched off, to reduce startup delay in case they are suddenly required.
