The present is virtual, the future should be too

Containers are visitors from hyperscale-land. They should respect your ways when you invite them in


Welcome to the latest Register Debate in which writers discuss technology topics, and you – the reader – choose the winning argument. The format is simple: we propose a motion, the arguments for the motion will run this Monday and Wednesday, and the arguments against on Tuesday and Thursday.

During the week you can cast your vote on which side you support using the embedded poll, choosing whether you're in favor or against the motion. The final score will be announced on Friday, revealing whether the for or against argument was most popular. It's up to our writers to convince you to vote for their side.

This week's motion is: Containers will kill virtual machines

And now, today, arguing AGAINST the motion is CHRIS MELLOR, the editor of our enterprise storage sister publication, Blocks & Files...

The history of the data centre is a long drive towards efficiency. Bare metal servers sat idle while they waited for I/O to finish, so multi-tasking operating systems were invented to let them get on with other work in the meantime.

Multi-tasking created demand for more servers, but all too often those machines were tightly coupled to a single application and operating system, and when that application wasn’t busy the server sat underutilized.

Virtualisation rescued servers from that underutilization and meant organisations could run fewer but bigger physical servers and myriad virtual machines (VMs). Hypervisors could load VMs with different operating systems so that one physical server could run Windows, Unix and Linux environments simultaneously. Each VM was given the resources it needed and everything was rosy - for a while.


Then came hyperscale services running on millions of servers, which made it critical to extract every last cycle of server power with as little waste and idle time as possible.

VMs didn’t work well at hyperscale. Enter containers and microservices, which have become the base execution unit for hyperscale services and, more recently, for mainstream software built with the same techniques hyperscale operators use.

So now we have two kinds of data centres used by businesses and other organisations: VM-centric data centres and containerized data centres.

We also have two ways of producing applications.

It’s confusing and complex.

What should we do?

One option is to have public clouds convert to VM-centric operations, but that won’t happen because hyperscale operators’ resource recovery models need containers. VMs as the core execution unit are too wasteful of IT resources.

Another option is for the on-premises world to convert to microservices, containerize everything and run like the public clouds. But the complexity and expense involved are out of proportion for non-hyperscale operations.

The third choice is to go hybrid, to combine the different on-premises and public cloud worlds under an abstraction layer that presents a unified and coherent environment to run applications.

Brilliant idea. Then the on-premises world could carry on doing what it’s doing, running virtual machines on virtualized servers, and the public clouds could carry on running containers.

One problem: where is this abstraction layer?

It already exists. It’s called virtualization – because a virtualized server can run containers.

What strange magic is this? The tools that manage containers – Kubernetes, for example – are applications like any other. They’re better off virtualized. Containers themselves share an operating system, and any instance of an OS is better off virtualized.
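If that sounds abstract, consider how little a container cares about what sits underneath it. Here is a minimal sketch, assuming a Linux VM with Docker Engine and the Docker SDK for Python already installed (the image, port and host are illustrative only), that starts a container inside a VM exactly as it would on bare metal:

```python
# A minimal sketch, not a production recipe: containers running inside
# a virtual machine. Assumes a Linux VM with Docker Engine installed and
# the Docker SDK for Python (pip install docker); image and port are
# illustrative only.
import subprocess

import docker

# Ask the guest OS whether it is virtualized (systemd-based Linux only).
# Typical outputs: 'vmware', 'kvm', 'microsoft' - or 'none' on bare metal.
virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
print("Host virtualization:", virt.stdout.strip() or "none")

# Start a container exactly as you would on bare metal. The container
# shares the VM's kernel; it neither knows nor cares about the hypervisor.
client = docker.from_env()
container = client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})
print("Container", container.short_id, "is serving on port 8080 of a virtual host")
```

The point of the sketch is the absence of any VM-specific step: the container workflow is identical either way, which is exactly why virtualization can serve as the abstraction layer.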

Further, we don’t need containers to have on-premises-to-public cloud application mobility.

Virtual machines are already mobile. VMware, Microsoft and all the big clouds offer VM migration tools and services.

VMware, which dominates the virtual server market, has partnerships that let VMs it created run in AWS, Azure, Google Cloud, Oracle Cloud and Alibaba Cloud.
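To make that mobility concrete, here is a minimal sketch of one such path – AWS’s EC2 VM Import service, driven from boto3, which turns an exported disk image into a launchable machine image. The bucket and file names are hypothetical, and the VMDK is presumed already exported from the hypervisor and uploaded to S3:

```python
# A minimal sketch, assuming hypothetical bucket and key names: importing
# an on-premises VM disk image into AWS with the EC2 VM Import service.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Kick off the import task; AWS converts the disk image into an AMI.
task = ec2.import_image(
    Description="on-prem web server exported from vSphere",
    DiskContainers=[{
        "Description": "boot disk",
        "Format": "VMDK",
        "UserBucket": {"S3Bucket": "my-vm-exports", "S3Key": "webserver.vmdk"},
    }],
)
print("Import task started:", task["ImportTaskId"])

# Poll the task; once it completes, the resulting AMI can be launched
# as an ordinary EC2 instance.
status = ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]])
print(status["ImportImageTasks"][0].get("Status"))
```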


Because VMs are already mobile, we don’t need to containerise our applications to enjoy multi-way mobility between public clouds and on-premises data centres. And even if you do decide to develop with containers, they need the resilience, security and manageability that virtual machines afford.

Get with the virtualised server program, container purists. Virtualised servers are mature, sensible and low friction. ®

Cast your vote below. We'll close the poll on Thursday night and publish the final result on Friday. You can track the debate's progress here.
