"In our country," Alice told the Red Queen in Through the Looking-Glass, "you'd generally get to somewhere else – if you run very fast for a long time, as we've been doing."
"A slow sort of country!" the Queen replies. "Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!"
What does this have to do with you? A lot, if you've been transforming and updating your corporate IT systems only for them to be just as brittle, sclerotic and out of date as when you started.
It seems that, like Alice, you're used to a slow sort of country. In the context of enterprise IT systems, why do we seem to be stuck in a Red Queen's race, with no possibility of rest, stability or improvement in delivery?
In any economic setting there will inevitably be a limited budget to update and improve systems, and we need to understand two forces that act on that budget: first, what drives its size, and second, how it will be spent. Budget size is important (no sniggering at the back): any system starved of resources will inevitably expire. So how are IT transformation budgets set?
Given that the prevailing corporate view of IT – where anyone bothers to form one at all – is of a drag on the organisation, the standard consequence is to minimise IT spending.
While the IT department may have a view on just how well the organisation can function without it, companies in general have a very poor understanding of their degree and scale of reliance on IT. Equally, functioning IT doesn't provide much evidence of need – systems work, data gets processed, outages are rare – so why do you need any money for maintenance?
With a limited budget for maintenance or improvement, how will it be allocated to the various systems managed by IT? Remember that in having to justify the spend at all, the primary need is to demonstrate business impact and – equally – guarantee that there is no risk to continued operations.
Inevitably, depending on organisational perspective, there are essentially three underlying approaches. First, maximise the number of systems that have been updated: demonstrate lots of work has taken place. Next, minimise the risk that any update will fail: have no impact on the ongoing organisation. Finally, maximise the apparent impact on the direct customers for IT systems – improve the immediate return to the business.
If the organisation maximises the number of systems updated, then the clear imperative is to choose systems that are easy (cheap) to update. The systems that are cheap to update are invariably the ones with the least difference between the in-use and current versions – in other words, the systems that were updated during the last round of updates. So the organisation will choose to improve those systems just beyond some minimum obsolescence criterion, until all of the budget is spent.
Minimising risk requires that the gap between in-use and updated is as small as possible, so that the risk impact of the change on other systems can readily be estimated. If the organisation wants a small, low-risk update gap, then again it will choose systems just beyond a minimal obsolescence criterion and improve them until the budget is exhausted.
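Both approaches boil down to the same greedy loop: sort systems by how far they've drifted from current, update the cheapest first, stop when the money runs out. A toy sketch of that logic (the system names, costs and threshold here are invented for illustration, not drawn from any real portfolio):

```python
# Toy model of the budget-allocation logic described above (all numbers invented).
# A system's update cost is proportional to its "gap": how far in-use lags current.

def pick_updates(systems, budget, obsolescence_threshold=1):
    """Greedily update the systems just past the obsolescence threshold,
    cheapest (smallest gap) first, until the budget runs out."""
    candidates = [s for s in systems if s["gap"] >= obsolescence_threshold]
    candidates.sort(key=lambda s: s["gap"])  # cheapest / lowest-risk first
    chosen, spent = [], 0
    for s in candidates:
        cost = s["gap"] * s["cost_per_gap"]
        if spent + cost > budget:
            break
        chosen.append(s["name"])
        spent += cost
    return chosen, spent

systems = [
    {"name": "web front end", "gap": 1,  "cost_per_gap": 10},
    {"name": "API layer",     "gap": 2,  "cost_per_gap": 10},
    {"name": "billing batch", "gap": 8,  "cost_per_gap": 10},
    {"name": "core ledger",   "gap": 12, "cost_per_gap": 10},
]

chosen, spent = pick_updates(systems, budget=50)
print(chosen, spent)  # ['web front end', 'API layer'] 30
```

The recently updated systems win every time; the old core never makes the cut, because its gap alone would blow the whole budget.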
Finally, maximising customer impact requires that the systems that present information to final users or customers are updated. With the separation between presentation, logic and data layers within corporate IT systems, this means that only the presentation, and potentially some aspects of the logic layers, will be updated. Which is great as these are the layers that are most likely to have been updated recently anyway under either of the other approaches to update selection.
In essence, all of the above leads to the same conclusion: with a minimised budget and a risk-averse outlook, update the systems that are easy to update – the systems that have been most recently updated. Indeed, this approach now has both a name and an advocate, to wit Mode 2, which at its heart says: "Don't touch the hard stuff if it's too hard, just update the wrappers."
An interesting question is what the long-term consequences of these economically rational approaches to IT systems updating are. Remember that in an activity with a generational time of approximately 24 months, "long-term" means anything longer than six years. After repeated applications of these (equivalent) approaches, we would expect to observe a "speciation event" in our systems.
Extremes of age: old and young servers survive in a bi-modal world; those in between get chopped
The systems will split into two groups: a bright, shiny collection of bang-up-to-date systems that get the upgrade spend every cycle, and a collection of older systems that are too costly, risky or – frankly – boring to bother updating. The long-term consequence is that the old systems are wrapped in ever-thickening layers of new systems to keep their apparent functions current, but – in reality – these layers simply add to the "drag" on any attempt to become "agile".
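Run the cheapest-first selection rule over enough cycles and the split falls straight out of the arithmetic. A minimal simulation (all parameters invented; "age" counts generations since last update, and cost grows with age):

```python
# Minimal simulation of repeated update cycles (all parameters invented).
# Every cycle each system ages by one generation; the cheapest (youngest)
# systems are updated until the budget runs out, resetting their age to zero.

def run_cycles(ages, budget, cost_per_generation=1, cycles=12):
    for _ in range(cycles):
        ages = [a + 1 for a in ages]  # everything drifts further out of date
        order = sorted(range(len(ages)), key=lambda i: ages[i])
        spent = 0
        for i in order:
            cost = ages[i] * cost_per_generation
            if spent + cost > budget:
                break
            spent += cost
            ages[i] = 0               # updated back to current
    return ages

# Ten systems, all starting equally current; the budget covers only the cheap ones.
final = run_cycles([0] * 10, budget=6, cycles=12)
print(sorted(final))  # [0, 0, 0, 0, 0, 0, 12, 12, 12, 12]
```

A bimodal population emerges after the very first cycle and then locks in: six systems ping-pong between current and one generation old, while the other four are never touched again – exactly the speciation event described above.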
From my perspective, this view of the state of enterprise-scale IT explains a particularly interesting problem: why do large-scale transformation projects stall after 30-40 per cent of the work has been done? The answer is that this proportion roughly matches the share of up-to-date systems most organisations maintain – and those are the only ones that are relatively straightforward to transform.
Is there a way out of the Red Queen's race?
How can we run twice as fast? Key to this is establishing the true risk and exposure organisations have to legacy IT – in other words, gaining a better understanding of the degree of technical debt the organisation currently holds. As boring as it sounds, if organisations had to carry technical debt on their books – just as they carry the value of their brand among their assets – then, finally, they might understand both their exposure and the necessary spend on their critical IT assets. ®