
Potholes on the road to server virtualisation

And how to avoid them

Lab While the broader question of what’s going to stop server virtualisation going mainstream might be interesting, it’s far more pertinent to find out what could stop it working in your own organisation.

According to your feedback, virtualisation itself doesn’t seem to require that great a skill set to get going. The challenge comes when you try to move it up a level. So, what have we learned, and what can you do about it?

Modern server hardware has built-in feature sets that take virtualisation into account, but the physical capabilities of the underlying hardware and the configuration of the virtual machine can both act as bottlenecks if they are under-specified. We’ve written about the bottlenecks in physical devices, and how multiple virtual machines can end up contending for the same resources, either on the server or in the network.
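By way of illustration, here’s a minimal sketch – Python, and assuming a Linux guest – of one quick contention check: sampling the CPU "steal" counter in /proc/stat, which records time the hypervisor spent servicing other guests while this one wanted the processor. A persistently high figure suggests the host is oversubscribed.

import time

def cpu_times():
    # First line of /proc/stat holds the aggregate "cpu" counters.
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]
    values = [int(v) for v in fields]
    # The 8th field is steal time (present since kernel 2.6.11).
    steal = values[7] if len(values) > 7 else 0
    return steal, sum(values)

s1, t1 = cpu_times()
time.sleep(5)
s2, t2 = cpu_times()
print("CPU steal over sample: %.1f%%" % (100.0 * (s2 - s1) / max(t2 - t1, 1)))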

We’ve also received ample feedback on the importance of allocating sufficient RAM to each VM. Some workloads may never be suitable for virtualisation – though these might be fewer and further between as time passes. It’s also likely that we’ll see more applications written with virtualisation in mind.
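If you’re scripting memory allocation rather than clicking through a console, the libvirt bindings are one way in. A minimal sketch, assuming the libvirt Python bindings on a KVM host – the guest name "mail01" is purely illustrative:

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("mail01")  # hypothetical guest name

# info() returns state, max memory (KiB), current memory (KiB),
# vCPU count and cumulative CPU time.
state, max_kb, cur_kb, vcpus, cpu_ns = dom.info()
print("current allocation: %d MB of %d MB maximum" % (cur_kb // 1024, max_kb // 1024))

# Grow the running allocation via the balloon driver (value in KiB);
# it cannot exceed the domain's configured maximum.
dom.setMemory(2 * 1024 * 1024)  # 2 GB

conn.close()

Note that ballooning a running guest upwards only works within the maximum configured for the domain – going beyond that means a reconfigure and a restart.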

Meanwhile, there’s also the physical/virtual divide. Some evangelists like to give the impression that the virtual world exists entirely on its own, with the hypervisor/management layer taking the strain of physical management without any need for intervention. But while management tools exist, and can certainly help, they don’t necessarily come cheap.

Some readers have built their own monitoring and provisioning tools – which is fine if you have the luxury of time and the skills to do so.
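For the curious, the home-grown approach needn’t be elaborate. Here’s a bare-bones sketch of the sort of polling monitor readers describe – again assuming the libvirt Python bindings, with an arbitrary one-minute interval:

import time
import libvirt

conn = libvirt.open("qemu:///system")
while True:
    for dom in conn.listAllDomains():
        if not dom.isActive():
            continue  # skip defined-but-stopped guests
        state, max_kb, cur_kb, vcpus, cpu_ns = dom.info()
        print("%-12s vCPUs=%d mem=%d/%d MB" % (
            dom.name(), vcpus, cur_kb // 1024, max_kb // 1024))
    time.sleep(60)  # arbitrary polling interval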

As an additional complication, not only are management features sometimes duplicated between the virtual and physical worlds, but sometimes they can actually act against each other – for example in database clustering, or the high-availability features of email servers such as Exchange. So it's important to understand what capabilities exist on either side, and make the best use of each.

A number of challenges exist beyond the technical, in areas such as software licensing and vendor support. Historically, applications were not sold with virtualisation in mind and vendors are still working out business models that make sense to all sides. No doubt they will in time.

With all these challenges, how do you get there? There’s plenty of commentary from the Reg readership suggesting this is a journey worth embarking on, but some steps are harder than others. The organisations coming unstuck appear to be the ones that prepared insufficiently, or that didn’t really consider where virtualisation would take them beyond straightforward consolidation.

Thinking ahead does appear to be a prerequisite. It requires technical and non-technical skills and knowledge that may currently be at a premium, but the danger lies in assuming that "everything will be alright". Thinking ahead also means you can cost things up correctly – you don’t want to hit a wall and find you have to go back, cap in hand, to the finance director for more cash.

For now, perhaps the biggest distraction is the suggestion that virtualisation is a stepping stone to something even bigger. Don’t be waylaid by this. Getting the technology platform right, and delivering on it, is far more important than aiming for the stars just yet. ®
