Counting the cost of virtualization

Avoiding nasty surprises


Reader Workshop The benefits of virtualization, particularly in relation to x86 server consolidation, are pretty well recognised. The significant reduction in hardware requirements and operational overheads that many have achieved with their initial deployments will in many cases have easily justified the cost of the virtualization technology itself.

As organisations move beyond the low-hanging fruit of obvious consolidation opportunities and begin to work virtualization into the fabric of their infrastructure, however, both the cost of acquisition and the ongoing total cost of ownership come into sharper focus.

Fortunately, on the supplier front, the market has moved on from the early phase in which x86 server consolidation was a one-horse race, with VMware pretty much having the market to itself. Today, a couple of other significant players are in the running, namely Citrix and Microsoft, and with everyone chasing the mainstream opportunity, the premium prices associated with the early market are already being challenged.

With hypervisor bundling and open source options also in the mix, there is then the question of whether at least basic virtualization capability should be something we have to pay for at all.

Beyond the basics, however, we need to consider how virtualised environments are managed, particularly as deployments are scaled up. One of the big advantages of virtual machines is the speed and ease with which they can be created and provisioned, but this is a double-edged sword. Some have found, for example, that the problem of virtual machine sprawl quickly rears its ugly head.

This not only starts to reintroduce some of the operational overhead that virtualization was designed to get rid of in the first place, but can also make it difficult to control things from a software license perspective, potentially creating both cost and compliance issues.

Picking up on this last point, it is also clear that some platform software, middleware and application vendors are still struggling to deal effectively with the licensing and support issues that arise as deployments move from physical to virtual machines. The practice of tying software license fees to physical machine capacity, for example, creates obvious issues when consolidating workloads from several smaller boxes onto one larger one. Situations like this can easily lead to customers being penalised unfairly and/or running into yet more compliance exposures.
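
To put some illustrative numbers on that (and they are purely hypothetical), here is a back-of-the-envelope Python sketch of how a licence tied to physical sockets can penalise a workload once it has been consolidated onto a larger host:

# Hypothetical figures throughout -- the point is the shape of the
# calculation, not the numbers themselves.
PRICE_PER_SOCKET = 5_000          # assumed list price per physical CPU socket

# Before consolidation: the application ran alone on a small 2-socket server.
sockets_before = 2
cost_before = sockets_before * PRICE_PER_SOCKET

# After consolidation: the same application runs in a VM using a fraction of
# a 4-socket host, but licence terms tied to physical capacity count every
# socket in the box, not the slice the VM actually consumes.
sockets_in_host = 4
cost_after = sockets_in_host * PRICE_PER_SOCKET

print(f"licence cost before consolidation: {cost_before}")   # 10000
print(f"licence cost after consolidation:  {cost_after}")    # 20000

Per-core, sub-capacity and per-VM terms all change the arithmetic, which is exactly why licensing deserves as much scrutiny as the hardware savings.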

And coming back to the operational question, there is then the additional cost of having to recreate problems on a physical machine before support can be obtained from software vendors who refuse to acknowledge the legitimacy of running their product in a virtualised environment.

Perhaps the biggest operational challenge, however, is creating the most appropriate infrastructure for monitoring and managing virtual machines. In all but the smallest of installations, the requirement here is for cost-effective systems management solutions and there are two schools of thought emerging in this area.

The first is based on the premise that virtualization brings with it a unique set of operational requirements that are best dealt with through a dedicated systems management approach running in parallel to your existing management tools and processes. The second argues for a more holistic approach, in which existing tools are extended so they can manage both physical and virtual assets together in a coherent manner.

Either way, there is a potential cost implication in terms of incremental investment required.

In addition to the above, when we factor in the expense of training or hiring to acquire the necessary skill sets, the upgrades often required to the storage and networking infrastructure, and so on, it is clear that we need to think beyond how much we pay, or don't pay, for basic enabling software such as the hypervisor when counting the full cost of virtualization.
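
To illustrate, the following Python sketch, again using nothing but hypothetical figures, totals a handful of typical cost lines; even with the hypervisor itself priced at zero, it accounts for only a small slice of the overall bill:

# All figures are hypothetical and exist only to illustrate the breakdown.
costs = {
    "hypervisor licences":        0,       # assume a bundled or open source option
    "management tooling":         40_000,
    "training and hiring":        25_000,
    "storage/network upgrades":   60_000,
    "ongoing admin effort":       35_000,
}

total = sum(costs.values())
for item, cost in costs.items():
    share = (cost / total * 100) if total else 0.0
    print(f"{item:<26} {cost:>8,}  ({share:4.1f}% of total)")
print(f"{'total':<26} {total:>8,}")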

Against this background, we would be interested in your views and experiences with regard to the true cost of virtualization, particularly as you scale up deployments.

What costs have you incurred so far? Were these higher than expected in some areas, or were there any unpleasant surprises? And how are things changing, or how do you think they should change, looking forward? What could suppliers do to ease the burden? Any thoughts on how your budget structures and accounting mechanisms are coping with the sharing of infrastructure between departments would also be interesting to hear.

Give us your feedback in the comment area below. ®
