Go Virtual to become Agile
Getting just what you need from a virtual pot
The bigger the IT user, the more likely they are to suffer the slings and arrows of outrageous server under-utilisation. Most of the major research companies have, at some time or another, studied the use of server resources in production environments and found it to be low. A typical server – you know the type of beast: dual Xeons, a gigabyte or two of memory and a reasonable RAID array – does something productive for only around 20 percent of its life. The rest of the time it sits there, idling away the hours, doing nothing.
This is a by-product of two factors: the way that servers have developed as stand-alone entities, and the buying patterns of users. When users have needed more resources, they have bought more servers rather than exploiting the resources already standing around doing nothing. To be fair to users, exploiting those existing resources has been no easy task, because until now no clear-cut technological solution to the problem has existed. That solution is virtualisation – the ability to "build" virtual servers as and when they are required out of an existing pool of IT resources.
Virtualisation II
The technology is based on the ability to partition a computer so that it can run more than one task. The basics are old technology from the mainframe era, but they are now being applied far more widely. Intel's recent development of Virtualization Technology (VT), which builds the ability to partition individual processors into the hardware itself, means that the concepts of virtualisation can be taken right down to the level of the individual PC. There, each processor can be partitioned so that it runs multiple environments – a Windows application in one and a Linux application in another, for example. Each will run independently of the other, and a problem in one will not crash the other. Processors with VT will be available during 2006.
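By way of illustration – a minimal sketch, not something from the article – a Linux host advertises Intel's VT extensions as the "vmx" CPU flag (AMD's equivalent shows up as "svm"), so a few lines of Python are enough to check whether a machine is ready for this kind of hardware-assisted partitioning:

    # Sketch: check /proc/cpuinfo for hardware virtualisation support.
    # "vmx" is Intel VT; "svm" is AMD's equivalent extension.
    def has_hw_virtualisation(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
        return False

    if __name__ == "__main__":
        print("Hardware virtualisation supported:", has_hw_virtualisation())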
It is at the server level, however, that the most important benefits of virtualisation will be found. Those benefits are relatively simple to state: greater use of available resources, which in turn means a reduction in (or a greater return on) server investment over the long haul; greater flexibility and operational agility, because applications can far more often be run when they are needed rather than waiting for additional resources to be purchased; and better systems management and lower long-term running costs through the centralisation of more powerful server resources.
The words "more powerful server resources" will be for many users (and the larger the user the more likely is this to be the case) an implied threat of more need for investment, and in the short term that may be true. In the long haul, however, virtualization has the potential to slow the rate of investment and generate a better return. This is because the best approach to virtualisation is to consolidate the servers – effectively replacing a plethora of dispersed, individual servers with more centralised datacentres built around racks of standard servers that lie at the heart of the corporate network infrastructure.
Datacentres give users the flexibility to change, adapt or grow their business processes in close to real time, to meet changing business needs. Subject to the provisions of an application's licence, they can install and run that application on just the slice of server resource it needs, at the time it is needed. The same hardware can then be used to run a different application once that specific task is completed. Most important of all, should a task become a high priority requiring the commitment of significant resources – a classic topical example being the workload of processing and fulfilling orders generated by a Christmas marketing campaign – those resources can be made available without purchasing yet more servers.
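That "virtual pot" behaviour can be sketched in a few lines. The ResourcePool class below is hypothetical – it is not any real product's API – but it captures the idea: an application borrows a slice of a shared pool for the duration of a task, then hands it back for the next workload:

    # Hypothetical sketch of a shared resource pool: applications borrow a
    # slice of CPU and memory for a task, then release it for reuse.
    class ResourcePool:
        def __init__(self, cpus, memory_gb):
            self.free_cpus = cpus
            self.free_memory_gb = memory_gb

        def allocate(self, name, cpus, memory_gb):
            if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
                raise RuntimeError(f"not enough capacity for {name}")
            self.free_cpus -= cpus
            self.free_memory_gb -= memory_gb
            return {"name": name, "cpus": cpus, "memory_gb": memory_gb}

        def release(self, slice_):
            self.free_cpus += slice_["cpus"]
            self.free_memory_gb += slice_["memory_gb"]

    pool = ResourcePool(cpus=64, memory_gb=128)
    orders = pool.allocate("christmas-orders", cpus=32, memory_gb=64)  # peak load
    # ... campaign processed and fulfilled ...
    pool.release(orders)  # the same hardware now serves the next application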