DevOps isn't just about the new: It's about cleaning up the old, too

Paying off your 'technical debts'

As one of my coworkers used to say when confronted with The Latest Development Improvement Methodology: “Why don’t you come down here and chum this stuff?” – except he used the language of a sailor.

In trying to implement the latest breakfast cereal agenda, DevOps, one of the primary chumming tasks is dealing with all your “pre-DevOps” software and services.

We call this “legacy” and it’s more or less the result of too much unaddressed “technical debt.”

The techniques for dealing with legacy never leave you feeling good: just like eating a box of cereal, over the kitchen sink, all the way down to the green leprechaun dust. But, there are some pragmatic ways of making sure legacy doesn’t totally wreck your DevOps efforts to create more resilient, more productive software.

Identifying legacy

First, if you’re starting from scratch, with no existing software, with the crisp scent of Expo white board markers still lingering in the air, you have no legacy problems. Enjoy your tasks of creating legacy code for the future you! However, in most large organizations, you’ll have plenty of legacy code and systems.

I use two tests to identify legacy code:

  1. It’s running the current business. The code is keeping the lights on, running however many decades of existing business process has gotten your company to where it is. For example, people often (factually) joke that the IRS is still running systems from the Kennedy era.
  2. You’re afraid to change it, most likely because it is poorly understood and has poor test coverage. This is compounded by Michael Feathers’ legacy code dilemma: to add unit tests, you must change the code; to change the code, you need unit tests to show that your change was safe.
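
One pragmatic escape from that dilemma is what Feathers calls a characterization test: before changing anything, you pin down what the code does today, not what it was meant to do. A minimal sketch in Python, where `calculate_invoice_total` is a hypothetical stand-in for a poorly understood legacy function:

```python
def calculate_invoice_total(line_items, region):
    # Hypothetical legacy function: its exact rules are poorly understood,
    # so we record its observed behavior rather than its intended behavior.
    total = sum(qty * price for qty, price in line_items)
    if region == "EU":
        total *= 1.2  # undocumented VAT rule, discovered by running the code
    return round(total, 2)

# Characterization tests: they assert what the code *does* today, warts
# and all. Once they pass, they form the safety net that makes further
# changes tolerably safe.
assert calculate_invoice_total([(2, 9.99)], "US") == 19.98
assert calculate_invoice_total([(1, 100.0)], "EU") == 120.0
```

With those assertions in place, you can start refactoring toward code you're no longer afraid to touch, since any behavior change gets flagged immediately.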

For those lucky few who don’t need to evolve their software (and their cursed users), dealing with legacy code isn’t an issue. But the rest of us need techniques to manage the risk of working with legacy code.

Quarantine the slow movers

As in dealing with any pack of zombies, the first thing you want to do is identify and then isolate as many of your legacy applications as possible so that you can ignore them, freeing up time to focus on the feisty ones. In enterprise architecture management, this means doing some basic portfolio analysis. And, sure, I bet you have whole teams of people who do this already... right?

They know all the applications you’re running, the amount of money each brings in (“business value”), their expected life-spans and end-of-life plans, and they’ve identified key stakeholders and developers who know not only the software but the business it supports, forwards and backwards. Yup, we all have that functioning at 110 per cent ‘cause we’re “enterprise”!

Figure out which of the thousands of applications you have are low value and not worth spending time on. The second wave of quarantining is to find applications that haven’t been fully virtualized yet. With minimal changes, you can squeeze some resource savings (time and money) out of applications by virtualizing them.

After this, you’re left with a smaller set of applications that you care about. To some extent, you’re admitting defeat with these quarantined applications. On the other hand, you now have plenty of work for all those change-resistant folks who aren’t feeling the DevOps breakfast cereal vibe, if that’s a concern of yours. Now that you’ve cleared out some underbrush, what do you do with the trees that are left over?

Fork-lifting, strangling, and re-writing

The most common methods I see for dealing with the leftover legacy applications are to move them to your new platforms and methodologies, to introduce an API facade in front of them and slowly let them rot out as new code builds up behind the facade, or to start re-writing them.

“Fork-lifting” the application into a full-on DevOps-driven, continuous delivery approach can work if the application was written to be, generally, self-contained and didn’t depend on vendor-proprietary services or things like network file shares.

These are usually simple applications, and you’re usually not lucky enough to have them live through the initial quarantine filter. Also known as “lift-and-shift”, this approach looks the easiest but, as Forrester’s John Rymer points out, has the worst long-term payoff. This is because simply changing how you manage the lifecycle of the application, without changing the application itself, can limit the benefits of a DevOps-driven approach, namely the ability to quickly add new features while maintaining a high level of availability in production.

In those instances where your new applications must use legacy software and services, you can use the “strangler pattern” to lessen the annoyance of legacy. While you may wish this pattern was named after the psychopath, it’s named after the plant that slowly takes over trees.

The first step is to introduce a new layer of abstraction – an API or set thereof – that fronts the legacy services. Instead of calling back to that big database or ERP system directly, you call your own facade on top of it. That part is easy enough, and standard; the hard part is planning for the eventual rot-out of the old system. Judiciously, you start replacing capabilities in the legacy system with new code that’s more aligned with your new approach to software development, using some mild routing intelligence behind the facade to figure out when to call the legacy code versus the new code. Eventually, as with the strangler vine, only new growth is left.
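
That routing intelligence can start out as little more than a per-capability switch. A minimal sketch in Python, where `legacy_lookup`, `new_lookup`, and the `MIGRATED` set are all hypothetical stand-ins:

```python
# Hypothetical stand-ins for the old and new implementations of a capability.
def legacy_lookup(customer_id):
    return {"id": customer_id, "source": "legacy"}

def new_lookup(customer_id):
    return {"id": customer_id, "source": "new"}

# Each capability maps to its (legacy, new) pair of implementations.
HANDLERS = {"customer_lookup": (legacy_lookup, new_lookup)}

# Capabilities re-implemented so far. Strangling a capability means
# adding it to this set; callers of the facade never change.
MIGRATED = {"customer_lookup"}

def facade(capability, *args):
    """Callers hit this facade, never the legacy system directly."""
    legacy_impl, new_impl = HANDLERS[capability]
    impl = new_impl if capability in MIGRATED else legacy_impl
    return impl(*args)
```

As capabilities move into `MIGRATED`, the legacy entries go unused and can eventually be deleted: the vine has taken over the tree.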

Finally, you often have to bite the bullet and just re-write it. While this is the most time-intensive choice – and a risk-laden one if done slapdash – done properly it gets you the frequent-change benefits of continuous delivery driven by a DevOps approach to process.

With legacy code, there are no easy outs, or secrets. The most important thing is to be aware of that and not be bamboozled by people who are happy to sell you a perfect solution to your legacy “problems.”

Often, the right answer is to carefully do nothing and instead to focus on your net-new software without letting your legacy software and processes drag you down.

This way of ensuring that neither the old nor the new approach to software rocks the boat for the other is more of how I think of “two-mode IT”: decoupling those two parts of your portfolio so that they can independently evolve without negatively affecting each other. ®
