How do you manage service levels in a virtualised environment?

Business as usual or whole new ball game?


Lab In previous research projects we’ve examined the impact of service level monitoring and management, and found positive benefits in having a range of associated SLAs in place. No surprises there of course.

However, the main finding was that beyond a certain number it didn’t matter how much ‘extra agreement’ you had: there was a diminishing return in terms of the business’s positive perception of IT. In other words, past a certain point the extra effort to deliver simply stops being noticed.

It’s going to be interesting to see what impact virtualisation has on this very real law of diminishing returns. In theory, a major benefit of server virtualisation is that it enables an improvement in service levels above and beyond what can be achieved without it. Now that many organisations are taking their first steps into ‘mainstream’ virtualisation (i.e. beyond pilots and small-scale initiatives), we’re starting to find out just how the relationship between virtualisation and service level management plays out.

What changes in thinking does virtualisation bring? There are a couple of major differences from IT’s point of view, notably around server provisioning: IT becomes able (in principle) to effect changes almost instantly, whereas in the pre-virtualisation era timelines for responding to new requirements were measured in weeks rather than hours or minutes.

Another factor is the virtual-physical divide. It may be possible, and indeed desirable, to manage a virtual environment without any reference to the physical world, but adopters are discovering just how important it is to understand that divide. There are still physical servers involved, even though application logic is executed in virtual machines; and, not least, users still see ‘their’ applications as discrete entities, regardless of whether or not they live in a virtual environment.

However, are things really so different in practice with virtualisation in the mix when it comes to managing service levels?

There are, of course, a bunch of things that could make a difference.

A potential biggie is the architecture in play itself. We’re used to employing certain configurations to deliver pre-designated levels of scalability, performance and security in the physical world. Load balancing across multiple servers, for example, or "2N+1" failover models, or database clustering, or defence in depth – all of these models rely on physical server configurations which don’t have a direct virtual equivalent. It doesn’t necessarily make sense to load-balance across multiple virtual servers, for example, if they are all going to end up running on the same physical server.
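To make that last point concrete, here is a minimal sketch in Python – the VM and host names are entirely hypothetical, and the placement data is passed in as a plain dictionary rather than pulled from any real platform’s API – of the kind of sanity check an administrator might run before trusting a ‘load-balanced’ tier of virtual servers:

from collections import defaultdict

def check_pool_placement(pool_name, vm_to_host):
    """vm_to_host maps each VM name to the physical host it currently runs on."""
    hosts = defaultdict(list)
    for vm, host in vm_to_host.items():
        hosts[host].append(vm)

    if len(hosts) == 1:
        only_host = next(iter(hosts))
        print(f"WARNING: every VM in '{pool_name}' sits on host '{only_host}' - "
              "the load-balanced tier has no physical redundancy.")
    else:
        print(f"OK: '{pool_name}' is spread across {len(hosts)} physical hosts.")
    return dict(hosts)

# Example: a web tier that looks redundant at the virtual layer but is not.
check_pool_placement("web-tier", {
    "web-vm-01": "host-a",
    "web-vm-02": "host-a",
    "web-vm-03": "host-a",
})

If every member of the tier reports the same physical host, load balancing may still spread the traffic around, but it buys no resilience against hardware failure.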

The relationship between provisioning, procurement and service management may also be affected. In old money, it took a while to get new equipment in place – and corporate consumers of IT have been brought up on the principle of lead time. Even if equipment was available, it would still take days or weeks (or even months) to configure and deploy.

Virtualisation does indeed make such things much simpler – but the knock-on effect could so easily be that best practices such as asset management, configuration management and licence management get lost along the way. Indeed, it remains to be seen whether the current ‘gold standards’ of IT management best practice – ITIL and COBIT – will cut it in the virtual world.
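As a purely illustrative example – the function and the inventory contents below are invented rather than taken from any particular toolset – the reconciliation step that tends to get skipped amounts to little more than a set comparison between what the hypervisor says is running and what the asset register says should exist:

def reconcile_inventory(hypervisor_vms, cmdb_records):
    """Compare what the hypervisor reports as running with what the asset register holds."""
    unregistered = hypervisor_vms - cmdb_records   # running, but never recorded
    stale = cmdb_records - hypervisor_vms          # recorded, but no longer running
    return unregistered, stale

running  = {"web-vm-01", "web-vm-02", "test-clone-friday"}
recorded = {"web-vm-01", "web-vm-02", "old-db-vm"}

unregistered, stale = reconcile_inventory(running, recorded)
print("Running but not in the register:", sorted(unregistered))
print("In the register but not running:", sorted(stale))

The Friday-afternoon test clone that nobody recorded is exactly the sort of thing that slips past asset, configuration and licence management once provisioning takes minutes rather than weeks.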

This brings us to the manner in which we can or could operate an environment containing a blend of physical and virtual domains. The monitoring, management and reporting activities which worked perfectly well in a more static, physical environment may simply not be up to the job as virtualisation becomes more widespread across the IT infrastructure. The question is, what should you do about it – or, for those of you who have been there and done it, what did you do about it?

Something we’re very interested to hear about is how the emphases you place on these areas vary depending on what sort of company you are and how big your IT environment is. For example, if you are part of a fully resourced IT shop in a larger organisation you may have the luxury of being able to ‘over-provision’ your virtual environment in order to make sure that ‘the pool’ can withstand the demands placed on it.
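For what it’s worth, the back-of-the-envelope check that such over-provisioning is meant to satisfy can be sketched in a few lines of Python; the host capacities and the committed figure below are made-up numbers, purely for illustration:

def pool_survives_host_loss(host_capacities_ghz, committed_ghz):
    """True if the pool can lose its largest host and still cover the CPU committed to its VMs."""
    worst_case = sum(host_capacities_ghz) - max(host_capacities_ghz)
    return worst_case >= committed_ghz

hosts = [32.0, 32.0, 32.0]   # CPU capacity per physical host, in GHz (illustrative)
committed = 70.0             # total CPU committed to VMs across the pool, in GHz

print(pool_survives_host_loss(hosts, committed))   # False: only 64 GHz remains after losing a host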

Alternatively, someone working in a smaller IT shop with little or no margin may see virtualisation as a way of squeezing every last drop of goodness from the IT resources they have – or, conversely, see the additional micro-management that may be required as an overhead they could do without. Whatever your situation, we’d love to hear about it. ®

