The wonderful madness of metrics: Different things to different folk

Or, how I learned to stop worrying and verify

All of the above could easily knock one or even two nines off the headline figure. Once it is all taken into account, it should be obvious to anyone that when a vendor says 99.999 per cent uptime, it comes with caveats so big you could drive a fully loaded Transit van through them.
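For a sense of what a single digit is worth, here is a quick back-of-the-envelope calculation (an illustrative sketch, nothing more) of the annual downtime each availability level actually permits:

```python
# Annual downtime budget implied by each availability claim.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

for label, availability in [("three nines", 99.9),
                            ("four nines", 99.99),
                            ("five nines", 99.999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{label} ({availability}%): {downtime:.2f} minutes of downtime/year")

# three nines (99.9%): 525.96 minutes of downtime/year  (~8.8 hours)
# four nines (99.99%): 52.60 minutes of downtime/year
# five nines (99.999%): 5.26 minutes of downtime/year
```

Lose one of those nines to a definitional caveat and the permitted downtime multiplies by ten; lose two and five minutes a year quietly becomes nearly nine hours.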

Carrying on with our latest wheeze about cooking the virtual books: uptime means different things to different people. Whilst a hosting company may well advertise five nines availability for a website, that figure rarely means the same thing to everyone involved.

Cooking the virtual books is not limited to vendors; it is rampant within end-user organisations, organisations such as yours. We are all aware of the SLAs we have to meet, but frequently that four-hour response means exactly that, and nothing more.

A response is not the same as a fix. I have had servers on four-hour response contracts where it took in excess of twenty-four hours just to get the parts, because they had to be flown in from Europe.

OK, so the services were down and losing money, but the hardware vendor in question had met its obligation: the engineer had responded to the request within the required timeframe.

Internal customers are often in the same boat. If there is a hold-up in implementation due to reason X or Y, the ticket can be pended ad infinitum and still be within SLA because the delay was "outside the norms."
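To make the sleight of hand concrete, here is a minimal sketch of how a typical SLA clock works. The field names and intervals are hypothetical, not lifted from any real ticketing system, but the principle is common: pended time simply doesn't count.

```python
from datetime import datetime, timedelta

def sla_time_counted(opened, responded, pended_intervals):
    """Time charged against the SLA: wall-clock time from ticket open
    to first response, minus any intervals where the ticket was pended
    ("awaiting parts", "waiting on customer", "outside the norms")."""
    counted = responded - opened
    for start, end in pended_intervals:
        counted -= (end - start)
    return counted

opened    = datetime(2015, 6, 1, 9, 0)
responded = datetime(2015, 6, 3, 9, 0)       # 48 hours of real elapsed time
pended    = [(datetime(2015, 6, 1, 10, 0),   # clock stopped while parts
              datetime(2015, 6, 3, 8, 0))]   # were flown in from Europe

counted = sla_time_counted(opened, responded, pended)
print(counted)                        # 2:00:00
print(counted <= timedelta(hours=4))  # True -- "within SLA"
```

Forty-eight hours of outage; two hours on the books.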

Such a play on figures is rampant in just about every organisation with a service level agreement. Web services have evolved well beyond old static HTML, but some less-advanced hosts still rely on very rudimentary availability tests: pings, test file downloads, or similar.

All that proves is that the web server itself is up and running. It doesn't necessarily mean that the application logic and database services behind it are working as expected. The problematic bit is that a standard off-the-shelf contract is unlikely to go into this level of detail, unless we are talking about a big hosting company or a large (and well-paying) client.
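The gap is easy to see in code. Below is a minimal sketch, with a hypothetical URL and a hypothetical /health/full endpoint, contrasting a rudimentary ping-style check with an end-to-end check that actually exercises the application logic and the database behind it:

```python
import urllib.request

def shallow_check(url):
    """The rudimentary test: did the web server answer at all?"""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status == 200   # proves nothing about logic or data

def deep_check(url):
    """End-to-end test: hit an endpoint (hypothetical here) that runs
    real application logic and touches the database, then verify the
    dependencies it reports, e.g. {"app": "ok", "database": "ok"}."""
    with urllib.request.urlopen(url + "/health/full", timeout=5) as resp:
        body = resp.read().decode()
        return resp.status == 200 and '"database": "ok"' in body

# A host monitoring with shallow_check alone can report five nines
# while every transaction behind the front page is failing.
```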

This is where I call for some sanity around metrics, and for vendors to be more transparent about how they calculate them. Any vendor that wishes to put itself beyond reproach can do so by being upfront about the metrics and methodologies used.

Some of the more switched-on vendors also use independent third-party test houses to ensure there is no perception of, or room for accusations of, less than above-board practice. Without doubt, such services cost good money, but they give a seller a proven provenance. Combined with common metrics and test criteria, this would provide far better clarity.

At the end of the day, end users have become accustomed to questionable metrics, and the industry needs to be seen to be cleaning up its own act.

As to whether it will ever be fixed, I have my doubts, but stranger things have happened. Unless and until then, any customer who doesn't do due diligence, and takes a vendor's word without doing some fact-checking, will have only themselves to blame. ®
