Per-core licences coming to Windows Server and System Center 2016

We put in more, so you'll pay by the core


Microsoft looks to be moving to per-core licences, rather than per-CPU licences, for Windows Server 2016 and System Center 2016.

"Directions on Microsoft" chap Wes Miller tweeted links to a "Pricing and licensing FAQ" for Windows Server 2016 and System Center 2016 Standard and Datacenter Editions. Dated December 2015, the meat of the document offers this description of Redmond's future licensing plan:

The licensing of Datacenter and Standard Edition will move from processors to physical cores, which aligns licensing of private and public cloud to a consistent currency of cores and simplifies licensing across multi-cloud environments. Licenses for servers with 8 cores or less per proc will be same price as the 2012 R2 two-proc license price.

Core licenses will be sold in packs of 2 for incremental licenses needed above the required 8 cores per proc. The Standard Edition of Windows Server and System Center will license up to 2 VMs when all of the physical cores on the server are licensed.

The document goes on to explain how you'll be able to buy licences, as follows:

Core licenses will be sold in packs of two licenses. Each processor will need to be licensed with a minimum of 8 cores, which is 4 two-core packs. Each physical server, including 1 processor server, will need to be licensed with a minimum of 16 cores, which is 8 two-core packs. Additional cores can then be licensed in increments of two cores (one two-core pack) for gradual increases in core density growth. Standard Edition provides rights for up to two virtual OSEs when all physical cores on a server are licensed (minimum of 8 cores per proc and 16 cores per server).
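For those who want to sanity-check their own boxes, here's a minimal sketch of that arithmetic as we read the FAQ. The function name, and the assumption that every socket carries the same core count, are ours rather than Microsoft's:

    # Sketch of the Windows Server 2016 core-licensing arithmetic as described
    # in the FAQ. Assumes all sockets have the same core count; the function
    # name and structure are ours, not Microsoft's.
    def two_core_packs_needed(sockets, cores_per_socket):
        MIN_CORES_PER_PROC = 8     # each processor licensed for at least 8 cores
        MIN_CORES_PER_SERVER = 16  # each physical server licensed for at least 16 cores

        # Count every physical core, but no fewer than 8 per processor...
        licensed_cores = sockets * max(cores_per_socket, MIN_CORES_PER_PROC)
        # ...and no fewer than 16 per server.
        licensed_cores = max(licensed_cores, MIN_CORES_PER_SERVER)

        # Licences are sold in packs of two cores, so round up to a whole pack.
        return (licensed_cores + 1) // 2

    # The FAQ's baseline cases: a one-proc or two-proc server with 8 cores per
    # proc needs 16 core licences, i.e. 8 two-core packs.
    print(two_core_packs_needed(1, 8))  # 8
    print(two_core_packs_needed(2, 8))  # 8

Eight two-core packs is the 16-core baseline that, per the FAQ, will be priced the same as a 2012 R2 two-processor licence.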

Microsoft's documents say the company is doing this to make cores the common currency when licensing Windows Server. Redmond's already made moves in this direction on Azure, so per-core licensing for on-premises deployments should make hybrid cloud costs easier to understand.

There will be pain for users. As the graphic below shows, those of you with servers boasting two or four CPUs, and 10 or 20 cores, will require "additional licensing."

Microsoft assessment of new licensing scheme for Windows Server and System Center 2016
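Run against the configurations in that graphic, the sketch above shows where the extra packs come from (again, our arithmetic rather than Microsoft's):

    # Two sockets with 10 cores each: 20 cores to license, so 10 two-core
    # packs, two packs beyond the 16-core baseline.
    print(two_core_packs_needed(2, 10))  # 10

    # Four sockets with 10 cores each: 40 cores, or 20 two-core packs.
    print(two_core_packs_needed(4, 10))  # 20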

The company's justification for the changes is that Windows Server and System Center have lots of new features. So cough up, even if you don't plan on using them. Redmond can also point to the fact that it offers per-core licensing for other on-premises software, such as SQL Server and BizTalk, so it's actually doing you a favour by being more consistent.

Another nugget of information revealed in the FAQ is that Windows Nano Server is included as part of the licensing of the edition from which it is deployed.

You don't need to do anything about this for now. Microsoft says that "Customers will then begin transacting Windows Server and System by core-based licenses at the time of their software assurance renewal or at the time of net new license purchase outside of any Microsoft agreements."

In case Microsoft takes down the PDFs referred to in the tweets, we've popped them into Dropbox here and here. ®

