Hands on with Windows Server 2016 Containers

Containers and Docker support are big new features, but the current preview is rough

First Look Microsoft has released Technical Preview 3 of Windows Server 2016, including the first public release of Windows Server Containers, perhaps the most interesting new feature.

A container is a type of virtual machine (VM) that shares more resources than a traditional VM.

“For efficiency, many of the OS files, directories and running services are shared between containers and projected into each container’s namespace,” said Azure CTO Mark Russinovich.

Containers are therefore lightweight, so you can run more containers than VMs on a host server. They are also less flexible. Whereas you can run Linux in a VM running on Windows, that idea makes no sense for a container, which shares operating system files with its host.

Containers have existed for a long time on Unix-like operating systems, but their usage for application deployment increased following the release of Docker as an open source project in early 2013.

Docker provides a high-level API and tools for managing and deploying Linux container images, and Docker Hub is a public repository of container images.

The popularity of Docker has helped to promote a distinctive approach to application deployment, where developers focus on creating container images which can be deployed multiple times.

The live instances are disposable, and you update an application by updating and redeploying the images. Each image may implement a relatively small piece of functionality, which fits well with a style of software architecture called microservices.
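The image-based workflow described above can be illustrated with a minimal Dockerfile sketch. The base image name follows Microsoft's TP3 walkthroughs; the application binary and paths are hypothetical placeholders, not from any Microsoft documentation:

```dockerfile
# Build an immutable image containing one small service.
# "windowsservercore" is the base image name used in the TP3 walkthroughs;
# app.exe stands in for whatever service you want to package.
FROM windowsservercore
COPY app.exe C:/app/app.exe
CMD ["C:/app/app.exe"]
```

To update the application, you would rebuild the image and redeploy it, rather than patching running instances.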

Containers are now a Windows Server feature

Windows developers have missed out on the container fun, but Microsoft is putting that right in Server 2016 and on its Azure cloud platform. Container support is now built into Windows, with two different types on offer:

  • Windows Server Containers: Container VMs use shared OS files and memory
  • Hyper-V Containers: VMs have their own OS kernel files and memory

The current technical preview does not support Hyper-V Containers. What, then, is the difference between a Hyper-V Container and a plain old Hyper-V VM?

“Besides the optimizations to the OS that result from it being fully aware that it’s in a container and not a physical machine, Hyper-V Containers will be deployed using the magic of Docker and can use the exact same packages that run in Windows Server Containers,” said Russinovich, without giving details.

Microsoft has added nested virtualisation in Hyper-V in this release, so that you will be able to use Hyper-V Containers even if the host is itself a VM.

The main reason for using a Hyper-V Container is security. It is better isolated, and Microsoft regards it as a “trust boundary”, whereas this is not the case for Windows Server Containers. Even without Hyper-V, though, containers should be isolated. “If anyone found a way to escape a container, that’s a big deal,” said Senior Program Manager Taylor Brown in a training video.

In addition, Microsoft has ported Docker to Windows. This means you can use the Docker API and tools with Windows containers. It does not mean that existing Linux-based Docker images will run on Windows, other than via Linux VMs as before.
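In practice, that means the familiar Docker commands run against the Docker daemon on the Windows host. The session below is a sketch based on Microsoft's TP3 walkthrough; the image name comes from that documentation, and the container name is a placeholder:

```
docker images                                      # list the Windows base images on the host
docker run -it --name demo windowsservercore cmd   # start a container with an interactive cmd shell
docker ps -a                                       # list running and stopped containers
```

Linux-built images from Docker Hub will not run here; only Windows-based images work against the Windows daemon.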

Containers in Server 2016 TP3

Judging by the preview just released, it is early days for Windows Server containers. Microsoft has published sketchy documentation, in the form of walkthroughs that demonstrate creating and running a trivial container using command line tools, either Docker or PowerShell.
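For the PowerShell route, the walkthrough uses a Containers module shipped with the preview. The sketch below follows the TP3 documentation, though cmdlet names and parameters in a preview are liable to change; the container name and switch name are taken from the walkthrough:

```powershell
# List the container OS images installed on the host
Get-ContainerImage

# Create and start a container from the Server Core base image
$c = New-Container -Name "demo" -ContainerImageName "WindowsServerCore" -SwitchName "Virtual Switch"
Start-Container $c

# Open an interactive PowerShell session inside the container
Enter-PSSession -ContainerId $c.ContainerId -RunAsAdministrator
```

Note that containers created this way are invisible to the Docker tooling, and vice versa, as discussed below.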

Visual Studio tools push developers towards Azure

There is also a walkthrough of deploying to a Docker container on Azure, using Visual Studio Tools for Docker, and although these tools also make provision for non-Azure hosts, the walkthrough does not cover these scenarios. The tools encourage Azure deployment, making this the default choice with alternatives only available if you check a box for “Custom Docker Host”.

There are several potentially confusing issues confronting those experimenting with this technology. One is that Docker containers are distinct from Windows Server containers, although both appear to use the same underlying technology. You cannot manage a Docker container using the PowerShell library for Windows Server containers, nor vice versa. Microsoft says this is not the long-term plan.

Second, Windows container images are currently based on Server Core, which is Windows Server without a GUI. Presumably Nano Server, an even more cut-down edition, will also be supported.

Third, Microsoft has a lot still to do and has posted a “Work in Progress” FAQ describing what works and what does not. This includes advice like, “Commands sporadically fail – try again”.

Container VMs currently do not support everything that works in Windows Server Core. ASP.NET 4.5 does not run, for example, so you have to use the new ASP.NET 5.0, which is also in preview. PHP works with Apache but not in Microsoft’s IIS (Internet Information Services). Microsoft has a list of Windows features you can install but adds that “Many do not function once they are installed.”

The documentation also states that the system files in a Windows Container image must currently exactly match those in the host OS “in respect to build and patch level. A mismatch will lead to instability and or unpredictable behaviour.”
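One way to spot such a mismatch is to compare build numbers on the host and inside a container session. This is a hypothetical check of my own, not something from Microsoft's documentation:

```powershell
# Run on the host, then again inside a container session
# (e.g. via Enter-PSSession), and compare the results:
# in TP3 the build and patch level must match exactly.
[System.Environment]::OSVersion.Version
```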

This requirement defeats some of the value of containers, which are meant to avoid dependency issues. It is a tricky problem for Microsoft to address, but essential that it is addressed, since otherwise patching the OS could break the containers it hosts.

When it comes to Docker, a number of key commands are not yet supported. Commands that fail include docker commit, docker load, docker pause, docker pull and docker restart.

Such issues are not surprising in a public preview, but they are worth mentioning since, with so many limitations, many will prefer to wait for a more complete release before putting the technology to the test.


Biting the hand that feeds IT © 1998–2021