Potholes on the road to server virtualisation

And how to avoid them


Lab While the broader question of what’s going to prevent server virtualisation going mainstream might be interesting, it's far more pertinent to find out what’s going to prevent virtualisation working in your own organisation.

According to your feedback, virtualisation itself doesn’t seem to require that great a skill set to get going. The challenge comes when you try to move it up a level. So, what have we learned, and what can you do about it?

Modern server hardware has built-in feature sets that take virtualisation into account, but the physical capabilities of the underlying hardware and the configuration of the virtual machine can both act as bottlenecks if they are under-specified. We’ve written about the bottlenecks in physical devices and how multiple virtual machines can end up contending for the same resources, either on the server or in the network.

We’ve also received ample feedback on the importance of allocating sufficient RAM to each VM. Some workloads may never be suitable for virtualisation – though these might be fewer and further between as time passes. It’s also likely that we’ll see more applications written with virtualisation in mind.
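
On the memory point, a quick sanity check can catch over-allocation before it turns into contention. The sketch below is purely illustrative – it assumes a KVM host managed through libvirt with the libvirt Python bindings installed, rather than any particular product – and simply compares the RAM handed out to defined VMs against what the host physically has.

    # A minimal memory over-allocation check. Assumptions: a KVM host managed
    # through libvirt, with the libvirt Python bindings installed.
    import libvirt

    conn = libvirt.open("qemu:///system")   # local hypervisor connection

    host_mem_mb = conn.getInfo()[1]         # host physical memory, reported in MB

    allocated_mb = 0
    for dom in conn.listAllDomains():
        # maxMemory() is the VM's configured memory ceiling, in KiB
        allocated_mb += dom.maxMemory() // 1024

    print(f"Host RAM:         {host_mem_mb} MB")
    print(f"Allocated to VMs: {allocated_mb} MB")
    if allocated_mb > host_mem_mb:
        print("Warning: memory is overcommitted; expect contention under load")

    conn.close()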

Meanwhile, there’s also the physical/virtual divide. Some evangelists like to give the impression that the virtual world exists entirely on its own, with the hypervisor/management layer taking the strain of physical management without a need for intervention. But while management tools exist, and can certainly help, they don’t necessarily come cheap.

Some readers have built their own monitoring and provisioning tools – which is fine if you have the luxury of time and the skills to do so.
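
Home-grown monitoring along those lines can start very small. Again assuming a libvirt-managed host (the connection URI below is illustrative), a first cut might do no more than list each VM with its state, vCPU count and current memory:

    # A bare-bones VM inventory of the kind readers describe building themselves.
    # Again assumes a libvirt-managed host; the URI is illustrative.
    import libvirt

    STATE_NAMES = {
        libvirt.VIR_DOMAIN_RUNNING: "running",
        libvirt.VIR_DOMAIN_PAUSED:  "paused",
        libvirt.VIR_DOMAIN_SHUTOFF: "shut off",
    }

    conn = libvirt.open("qemu:///system")

    for dom in conn.listAllDomains():
        # info() returns (state, max memory KiB, current memory KiB, vCPUs, CPU time ns)
        state, _max_kib, cur_kib, vcpus, _cpu_ns = dom.info()
        print(f"{dom.name():<24} {STATE_NAMES.get(state, 'other'):<9} "
              f"{vcpus} vCPU  {cur_kib // 1024} MB")

    conn.close()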

As an additional complication, not only are management features sometimes duplicated between the virtual and physical worlds, but sometimes they can actually act against each other – for example in database clustering, or the high-availability features of email servers such as Exchange. So it's important to understand what capabilities exist on either side, and make the best use of each.

A number of challenges exist beyond the technical, in areas such as software licensing and vendor support. Historically, applications were not sold with virtualisation in mind and vendors are still working out business models that make sense to all sides. No doubt they will in time.

With all these challenges, the question is how to get there. There is plenty of commentary from the Reg readership suggesting this is a journey worth embarking on, but some steps are harder than others. The organisations that come unstuck appear to be the ones that made insufficient preparation, or that didn’t really consider where virtualisation would take them beyond straightforward consolidation.

Thinking ahead does appear to be a prerequisite. It requires technical and non-technical skills and knowledge that may currently be at a premium, but the real danger lies in assuming that "everything will be alright". Thinking ahead also means you can cost things up correctly – you don’t want to hit the wall only to find you need to go cap in hand back to the finance director for more cash.

For now, perhaps the biggest distraction is the suggestion that virtualisation is a stepping stone to something even bigger. Don’t be waylaid by this. Getting the technology platform right, and delivering on it, is far more important than aiming for the stars just yet. ®

