Don't get 2e2'd: How to survive when your IT supplier goes titsup
Why you should always see it coming
Analysis I used to know a finance director who had a favourite mantra: “Minimise fixed costs.”
The concept's a simple one: by all means use permanent staff to deal with the aspects of your business that don't change much, but where your revenue streams go up and down, think of ways of allowing the cost of servicing those revenue streams to vary in unison with the ebbs and flows.
Outsourcing is an obvious place to look, and companies all over the world are doing it. Yet in the last couple of weeks one of the UK's major service providers, 2e2, has gone spectacularly pear-shaped.
Customers have been sent into a major panic over retrieving and relocating their data and services, and cash-flow difficulties have prompted the administrators to place a letter on 2e2's website asking data centre users to contribute sums of between £4,000 and £40,000 to keep them running.
Although it's big news today, this is nothing new, and the trick to making sure your trousers don't become fast friends with your ankles is all in the preparation.
Back in the dot-com era, for instance, a cluster of startups all used a particular London-based web development house for their implementation and hosting services. This development house in turn used one of the big London data centre companies for its hosting services.
Everything became more than a little confused when the company in the middle went out of business: the hosting provider's bills stopped getting paid, so the hoster cut off the services and refused access until someone settled the outstanding balance. Thankfully, due partly to luck but largely to diligent record-keeping, it was possible to prove to the hosting provider that the equipment belonged not to the defaulting service provider but to the end client. So the client scooted in, claimed the kit, and installed it elsewhere.
Our kit! Oh God, our kiiiiiit...
Outsourcing doesn't just mean hosting, though: what about when you decide to lease kit instead of buying it? Atlantic Computers was an IT leasing company that blew up in the late 1980s. This was back in the day when IT was properly expensive, and a company I worked with leased its IBM System/38 (remember them?) from Atlantic.
Everything happened too quickly, and the upstream owner of the kit gave notice that it was going to come and repossess the equipment – which, given that it ran our entire business's enterprise resource planning system, would have meant utter disaster. Salvation came from the lateral thinking of one of the senior managers, who simply told the owners: “You can have it, but as it contains classified defence material we'll have to destroy it before it's removed from the premises”. Not too surprisingly, a far calmer process of negotiation followed and the equipment remained.
Next, consider one of the services we all get someone else to do instead of doing it ourselves: telephony and data circuits. Not many of us run our own fibre from centre to centre, deciding instead (quite sensibly) to rent services from companies that already have thousands of miles of fibre under the street. But what happens when the telco lets you down?
Consider the case of one UK SME that relied on its internet and voice lines to keep its call centre running. One day, of course, everything stopped working, and staff gazed with awe upon the facial expression of the JCB driver in the street outside as he realised what he'd just done. Again, the story is only partly sad: thankfully one of the lines ran the alarm system and had a priority-fix SLA on it, so the engineer who was soon on site was plied with tea and biscuits and persuaded to re-splice all the lines, not just the one he was obliged to fix.
By failing to prepare, you are preparing to fail
The thing is, though, there is seldom an excuse for falling victim to a service provider getting it wrong or going out of business. Occasionally I'd say it's forgivable: the demise of Atlantic was, for instance, quite hard to predict, and its clients couldn't necessarily have been expected to see that one coming.
The 2e2 example is, however, just daft. Do these clients not have lawyers who go through the contracts asking: “What if?” And have they not said to themselves: “Our data is critical, so what happens if we lose an entire data centre?” If they've agonised over having a secondary data centre and decided they can't afford it, they're entitled to a little sympathy. If they've not considered it, though, the same isn't true.
I've had complex telecoms contracts in the past, for instance, and it's always seemed sensible to understand the entire context of the connection. Take a leased-line internet connection to an office in North London, for instance; our supplier was COLT but the last kilometre or so was provided by BT as it was off-net for COLT.
The exercise was one of risk assessment and risk acceptance: because the tail came from a different upstream provider, we had to accept a degraded SLA, since the worst-case call-out time for a fault was the response time in our SLA with COLT plus the response time in COLT's SLA with BT.
Helpfully, assessing the stability of either company was easy, as both BT and COLT were going strong at the time. For that type of service, with its particular usage patterns and (limited) mission-criticality, the setup was fine; in other situations it wouldn't have been, and we'd have considered resilient links, multiple providers and the like.
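To make the arithmetic concrete, here's a minimal sketch of how that chained worst case stacks up; the figures and variable names are hypothetical, not the actual COLT or BT contract terms:

    # Worst-case fault response when your supplier's SLA sits on top of
    # a third party's. The hours below are made-up illustrative values.
    our_sla_with_supplier_hrs = 5       # our contract with the primary supplier
    supplier_sla_with_carrier_hrs = 8   # the supplier's contract for the off-net tail

    # The fault report has to travel down the chain, so the worst case is
    # the sum of the two response times, not the headline figure on either contract.
    worst_case_hrs = our_sla_with_supplier_hrs + supplier_sla_with_carrier_hrs
    print(f"Worst-case response: {worst_case_hrs} hours")  # 13 hours, not 5

Whether a figure like that is acceptable is exactly the risk-acceptance call described above.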
I live and work in the Channel Islands, which makes life interesting with regard to service provision. We have three hosting providers on Jersey, so consideration of single points of failure is always fun.

Say you're starting from nothing and you want a resilient data centre setup. You could go to one provider that has two data centres in different parts of the island, and benefit from the low cost of connecting the two (they're both connected to that provider's resilient metro network, after all). Or you could decide to go with two providers, which reduces the risk should one provider go under, but the interconnects will be more complex and expensive as you're going partially off-net. Or you could say that having two data centres on the same island is in itself too risky, not least because the power provision into the island as a whole resembles a bit of damp string and some hamsters in a wheel, and look to Guernsey or the mainland instead.
There's no right answer in the general sense – it's very much horses for courses – but you have to have these debates with yourself and justify the end decision.
Suppliers go under, and when they do sink, your business can suffer. If you sign up for a service without giving due consideration to the possibility of one of your suppliers failing you, you're gambling with your money, your time and, potentially, your business. ®
Dave is a senior network and telecoms specialist who has spent 20 years working in academia, defence, publishing and intellectual property. Founding technical editor of Network Week and Techworld, Dave specialises in the design, construction and management of global telecoms networks; infrastructure and software architecture; development and testing; and database design, implementation and optimisation. Dave and his family live in St Helier on the island paradise of Jersey.