It's completely unsupportable. Yes, we mean your brand new system
The problem started when those ridiculous users ... oh, hang on. It started in the IT department
Feature The concept of "shadow IT" is a familiar one. One of my favourite descriptions of it comes from security vendor Forcepoint, which says shadow IT is "the use of information technology systems, devices, software, applications, and services without explicit IT department approval."
It has grown exponentially in recent years with the adoption of cloud-based applications and services.
Most organisations — particularly their IT and security teams — are conscious of the potential threats from shadow IT and are on the lookout for it so it can be stamped out. Yet many of these same organisations are, in parallel, running activities whose outcomes can present just as big a problem as shadow IT.
Worse, it is often the IT departments themselves building the problem.
Let us take a real-life example. An innovative company devised a novel concept — in this case an internet-facing service that interfaced with various core systems at the back end. The architecture called for ten or so servers — load balancers, web-facing servers and back-end servers, all doubled up in the interests of resilience. There was no particular reliance on a specific platform or application — that is, there was no compulsion to run a Windows infrastructure because there was no reliance on (say) SQL Server or Active Directory, and there was nothing special about the virtual server infrastructure, hence the designers could be operating system agnostic.
A few weeks later, the tech was up and running on one of the popular Linux distributions. It worked well, it performed admirably, and customer take-up was good.
It makes perfect sense to pick the right tool for a job, of course — whether that's a particular operating system, as in our example, or some other novel tool that one discovers or that emerges onto the market. But only if you innovate within the organisation's capacity to support the result. Let us unpick what happened in our example.
Bringing in the users
First, authentication. Active Directory (AD) is ubiquitous as a directory service. Although it is relatively straightforward these days to interface non-Windows operating systems such as Linux into an AD setup, in this case that wasn't done — partly because the designers perceived that the new setup would be largely independent of the rest of the company network. Such an approach has pros and cons, the chief con being that the new estate sat outside the rigorously managed directory service and complicated the task of on- and off-boarding users.
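For illustration, the join itself is modest work on most modern Linux distributions. A minimal sketch using the realmd tooling, with a placeholder domain and admin account, might look like this:

```python
import subprocess

# A minimal sketch, assuming the realmd/SSSD packages are installed.
# "corp.example.com" and "ad-admin" are placeholders, not values from
# the example system described above.
def join_ad_domain(domain: str, admin_user: str) -> None:
    # Confirm the domain is discoverable before attempting a join
    subprocess.run(["realm", "discover", domain], check=True)
    # Create the computer account and configure SSSD for AD logins
    subprocess.run(["realm", "join", "--user", admin_user, domain], check=True)

if __name__ == "__main__":
    join_ad_domain("corp.example.com", "ad-admin")
```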
Next, patching. Most of us have a regime of regular Windows patching, implemented each “Patch Tuesday” either by our in-house IT teams (if we have one) or our IT service providers. How many of us include the non-Windows devices in that regime, though — particularly if we have only one or two examples of a particular technology?
Do we check each month to see if there are updates to our SAN controllers’ firmware, our LAN switches’ operating software, our printers’ firmware, or the Linux distributions we’re using on a handful of servers?
Answer: very few of us are particularly rigorous in patching the non-mainstream systems we run — and of course, if we have innovated and introduced a new technology, that inevitably sits outside our “mainstream” patching procedures. Furthermore, if we have built something unique then no patches will even exist until we write them.
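One way to shrink that blind spot is simply to make it visible. The following sketch is illustrative rather than prescriptive (the hostnames and per-host package managers are assumptions); it asks each non-Windows box what updates it is waiting for:

```python
import subprocess

# A minimal sketch: query each non-Windows host for pending updates.
# Hostnames and per-host package managers are illustrative assumptions.
HOSTS = {
    "lb01.example.com": "apt list --upgradable",   # Debian/Ubuntu
    "web01.example.com": "apt list --upgradable",
    "db01.example.com": "dnf check-update",        # RHEL and derivatives
}

for host, query in HOSTS.items():
    # Run the package manager's pending-updates query over SSH.
    # Note: dnf exits non-zero when updates exist, so we don't
    # treat a non-zero exit code as a failure here.
    result = subprocess.run(
        ["ssh", host, query],
        capture_output=True, text=True,
    )
    pending = [line for line in result.stdout.splitlines() if line.strip()]
    print(f"{host}: {len(pending)} line(s) of pending-update output")
```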
The real whammy, though, is supportability.
If we are innovating, then by definition we are producing something that is harder to support than something that has existed for years. The more innovation there is, the fewer people there are with the knowledge of how to support part or all of it. If we decide to produce something new and innovative, then fine. But if we then decide to introduce innovation at multiple levels, we are creating a support nightmare for ourselves. If we are producing a system whose program code does something innovative, why would we choose to base it on (for example) an operating system we have not previously used, or a database back end that is different from the ones we already have? Unless there is a compelling reason to do so, why give ourselves the problem?
Excessive innovation and one-off implementations
In 1999 I was CTO of an internet startup. I spent a week in a Massachusetts data centre building a collection of Windows NT 4.0 servers on Compaq ProLiant kit while my development team toiled away at the software development task back in London. Aside from a couple of hardware gremlins that were easily rectified, the build was very straightforward. In the cage next to me was a poor guy who was trying — and largely failing — to do pretty much the same as me, but with a bleeding edge setup of Windows NT 4.0 on DEC Alpha hardware.
When I departed at the end of the week, the technology battle was still raging and my beleaguered neighbour was, metaphorically speaking, bruised, bloodied and running very low on ammunition. If it was that hard to build, how hard would it have been to support? And as with my employer, his company's real innovation was in the software that would sit on the platform, not the platform itself, so his wounds were self-inflicted (or, more likely, inflicted by other techies in his organisation). Excessive innovation led to a one-off implementation nightmare and a supportability problem for the lifetime of the system.
For an example of how to be innovative yet remain in control, let us look outside the IT industry for a moment. If you want the ultimate automotive status symbol — something fast, mind-numbingly gorgeous and unique — then you may well land on Kahn's website. But Kahn doesn't make cars: instead, it takes vehicles from manufacturers such as BMW and Aston Martin and customises them lavishly. BMW and Aston Martin make amazing chassis and engines, so Kahn uses these supportable platforms and innovates on top of them. The firm wouldn't have the capacity to build all — or almost all — of a car and still deliver the products it does, so innovation is done where it needs to be, and deliberately not done on elements where a choice not to innovate does not detract from the end product. Or, in many cases, where the existing manufacturer has done a better job than a small customisation company could ever hope to do.
We must take a similar approach in IT. Innovation is critical to many businesses' survival, and we must continue to innovate in order to remain competitive. We must, however, be sure to consider the entire lifespan of what we are building, so that it remains supportable until it is decommissioned.
So, if we introduce a new operating system, or type of hardware, or back-end database platform, or for that matter anything we have not used before, it is absolutely essential to make sure its maintenance is added to the existing support regime. We must also ensure, before we build it, that we have the people — or at least sufficient access to the people — whom we need to support it. Our ten-server example from earlier could have been relatively simple to support — but in reality, several separate instances were built, and a handful of servers multiplied a few times to give several dozen, thus magnifying the patching and support task by an order of magnitude.
And remember: even if we choose not to innovate in a particular area, this does not mean we will not have to change it for something new at some point.
If we buy, say, a Cisco 5516-X firewall — a current product in the vendor’s catalogue — to protect the new service we are building, and we expect that service to exist for more than five years or so, we will need to replace the firewall before its end-of-service date in August 2026.
Similarly, if we run our new service on Windows Server 2019, we need to be aware of the January 2024 date for the end of mainstream support and the more distant (but not really all that far away, in reality) January 2029 extended support end date.
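Dates like these are easy to lose track of, so it pays to record them somewhere a script can nag you about them. A minimal sketch follows, using the dates above; the exact days are assumptions taken from the vendors' published lifecycle schedules:

```python
from datetime import date

# A minimal sketch: record end-of-support dates for the unexciting parts
# of a system and report how long each has left. Exact days are assumed
# from the vendors' published lifecycle schedules.
END_OF_SUPPORT = {
    "Cisco 5516-X (end of service)": date(2026, 8, 31),
    "Windows Server 2019 (mainstream)": date(2024, 1, 9),
    "Windows Server 2019 (extended)": date(2029, 1, 9),
}

today = date.today()
for component, deadline in sorted(END_OF_SUPPORT.items(), key=lambda kv: kv[1]):
    days_left = (deadline - today).days
    status = "already past" if days_left < 0 else f"{days_left} days away"
    print(f"{component}: {deadline.isoformat()} is {status}")
```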
Even the parts of our innovative new system that are not particularly innovative, then, will generally still need us to evolve them during the lifetime of the product. And that’s fine — we are much more likely to have a rigorous patching, testing, maintenance and upgrade regime for the core systems and technologies we already run and know how to work with.
But where we are doing something new, or introducing a technology that we have not previously used, we are opening a can of worms. The only acceptable way to approach this is to take a thorough, critical look at the implications of doing so.
How do we deal with access management? How will we back it up? Can we fix issues or replace broken components within the Service Level Agreement we have with the users? Do we have, or can we acquire in a timely fashion, the skills necessary to run, support, patch and periodically upgrade it?
Are we confident that the technology has the longevity we need? If the technology is new to the market, not merely new to us, how sure can we be that it performs well and can be configured securely? These are all questions we can answer confidently for technologies we already employ, but which we need to consider much more carefully for new tech.
Innovation is immensely positive, then. But do it in isolation and all you will achieve is an unsupportable white elephant. And even worse, you will have to explain to the executives why the IT department just spent a chunk of its budget implementing its own take on shadow IT. ®