Because the server room is certainly no place for pets

Machine hoarders burn cash – it's time to virtualise your legacy IT

Scared to look in the box

Restores for legacy systems often involve a day of reinstalling software, manually moving tapes around and sacrificing a goat to the storage gods so that the recovery will be completed. Legacy systems often do not have easily testable restores, leading to the "Schrödinger's back-up" problem: until data is lost, no-one really knows if it can be recovered.

Often the skills required to manage disaster recovery for these legacy systems are lost, and the recovery point objective (RPO) and recovery time objective (RTO) become unknowns. In this case, the business is arbitraging the cost of embarrassing data loss and downtime against the relatively low cost of migrating the legacy machine to a shiny new virtual machine.

Perversely, in many environments the people championing the retention of these risky old workloads are not the business management, but actual IT people trying to justify their lack of modern skill sets. As time goes on, the cost of migrating grows, as the skills required to migrate the obsolete tech become rarer.

As the risks become less understood, the true cost of operations increases until the box finally crashes.

Virtualisation changes the costs associated with management and agility for IT. Instead of needing a KVM switch or physical hands on a server to recover from a blue screen, even free hypervisors allow for quick out-of-band console access to troubleshoot issues.

Standardised virtual hardware abstractions mean that migrating from one generation of server to another requires just a few clicks, rather than a full rebuild and application-level migration. Storage array migrations that would have taken months now take hours or days. Abstracting the storage and management plane to the virtual level also frees us from vendor dependencies.

I can seamlessly move from one server, storage or network vendor to another without having to re-learn my day-to-day management tools and tasks. Testing of sketchy updates can be performed on clones of the virtual machine. This is important when I have an insanely flaky Tomcat server that carries a 20 per cent chance of successful update and an 80 per cent chance of blowing up and requiring a rebuild.

In the case of my recalcitrant Tomcat server, I can simply snapshot the virtual machine and make attempt after attempt until the upgrade goes through successfully. Easily cloneable cattle win out over pets quite clearly, here.
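The snapshot-and-retry loop described above can be sketched with libvirt's `virsh` tooling. This is a minimal illustration, not the author's actual setup: the guest name `tomcat-vm` and the upgrade command are placeholders, and it assumes a KVM/libvirt host with snapshot support on the guest's storage.

```shell
# Hypothetical snapshot-retry wrapper for a flaky in-guest upgrade.
# Usage: snapshot_upgrade <vm-name> <upgrade command...>
snapshot_upgrade() {
  vm="$1"; shift                                  # guest name, then the upgrade command
  virsh snapshot-create-as "$vm" pre-upgrade      # checkpoint before touching anything
  until "$@"; do                                  # run the upgrade; on failure...
    virsh snapshot-revert "$vm" pre-upgrade       # ...roll back to the checkpoint and retry
  done
  virsh snapshot-delete "$vm" pre-upgrade         # upgrade succeeded: drop the checkpoint
}
```

With something like `snapshot_upgrade tomcat-vm ssh tomcat-vm apt-get -y upgrade`, a failed attempt costs a revert rather than a rebuild, which is the whole cattle-over-pets argument in four lines.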

Virtualisation shortens the time to "fail". Being able to "fail" quicker and cheaper means proofs of concept can happen quickly and keep the environment from growing old. The purpose of IT isn't to keep the lights on, it's to enable the business to move quickly.

Legacy IT is a lethargic bottleneck on operations. There has to be a better reason for keeping dated IT around than preserving the jobs of the sacred priests maintaining the otherwise unknowable equipment.

Virtualisation is the first step. Automation and orchestration are the next. Self service for end users moves you into the cloud world and suddenly IT is delivering healthy services rather than tending to the needs and foibles of toxic legacy systems. ®
