Windows Server 2016 persistent memory support supercharges storage IO

Because it isn't necessarily storage IO at all


Analysis The best IO is... no IO. Windows Server 2016 has code to supercharge data storage IO speed by not treating it as IO anymore.

It uses storage-class memory (SCM) as a persistent store: one that sits on the memory bus, close to the CPU, and doesn't lose its contents when power is lost, an NVDIMM-type device. That can be provided by a flash DIMM on a host system's memory channel, or by a DRAM DIMM with on-DIMM flash backup against power failure.

Future storage class memory media possibilities include Intel and Micron’s 3D XPoint memory.

JEDEC has defined three classes of NVDIMM:

  • NVDIMM-N is a DRAM/Flash hybrid memory module using flash to save the DRAM contents upon power failure
  • NVDIMM-F is for all-flash DIMMs like those made by Diablo/WD (SanDisk)
  • NVDIMM-P combines the two: persistent media addressable both as memory (byte-addressable) and as block storage

JEDEC-compliant N-class Non-volatile DIMMs (NVDIMM-N) storage-class memory devices are supported in Windows with native drivers, starting in Windows Server 2016 and Windows 10 (version 1607).

You can use NVDIMM-N devices as byte-addressable and block-addressable storage in Windows Server 2016. For example, SQL Server log files could be stored on NVDIMM-Ns, with database records out on disk.

Here is a Microsoft video talking about its use as block storage:


Using NVDIMM-N as block storage in Windows Server 2016


NVDIMM-N deployment

NVDIMM-N is based on DDR4 DRAM plus flash for backup, and it exposes a block interface exactly like today's disk and SSD devices, with read and write file semantics above an SCM driver and load/store semantics below it, where the driver talks to the media. In a test, Microsoft found an NVMe SSD could do 55MBps at 68 microsecs latency. The same simple, single-threaded task ran at 700MBps and 5 microsecs latency on a DDR4 NVDIMM-N accessed in block mode.

NVDIMM-N block mode is enabled in Windows Server 2016 by a Microsoft-defined _DSM specification, exposing the device as a new kind of disk, and it is usable in Storage Spaces – Microsoft's storage virtualisation technology – for striping, mirroring and write-back caching. You can query information about NVDIMM-N devices through PowerShell cmdlets.

Storage-class memory can be driven faster still, using byte addressing. A second Microsoft video covers using NVDIMM-Ns in byte-addressable, or DAX, mode, with DAX standing for Direct Access volumes. Files are memory-mapped on to these volumes, and data access sidesteps the block-mode software stack, further reducing latency by giving apps direct access to the NVDIMM-N hardware.

Windows' pre-existing memory-mapped files infrastructure is used for this. DAX volumes in an SCM-aware file system (NTFS-DAX) are identified by a new flag, and updates are written directly to the NVDIMM-N with no OS storage stack involvement. NTFS-DAX talks to the SCM driver, which in turn talks directly to the NVDIMM-N media as before. In effect a DAX (Direct Access) application has load/store access to a memory-mapped region in the NVDIMM-N media.
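The access pattern DAX exposes is the familiar memory-mapped file one: map a file into the address space, then read and write it with ordinary memory operations rather than read/write IO calls. A minimal cross-platform sketch using Python's mmap module, where an ordinary temp file stands in for one on an NTFS-DAX volume (on real DAX the same mapping would touch NVDIMM-N media directly):

```python
import mmap
import os
import tempfile

# A throwaway temp file stands in for a file on an NTFS-DAX volume.
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, 4096)            # size the file so it can be mapped
    with mmap.mmap(fd, 4096) as view:
        view[0:5] = b"hello"          # store: an ordinary memory write, no write() call
        data = bytes(view[0:5])       # load: an ordinary memory read, no read() call
        view.flush()                  # on a DAX volume this amounts to flushing CPU
                                      # caches rather than a trip through the block stack
finally:
    os.close(fd)
    os.remove(path)
```

On a regular volume the flush goes through the page cache; the point of DAX is that the mapped pages are the persistent media itself.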


Windows Server 2016 DAX architecture

Filter drivers that rely on detecting and reacting to IO will see no IO and won't work; that rules out encryption filters, for example. Anti-virus will still work, as such code only needs to know when a file was modified (look for a close on the file) and run a scan when that's detected.

DAX volumes are created using one of two commands:

    Format n: /dax /q

    Format-Volume -DriveLetter n -IsDAX $true

Once created, DAX volumes carry a flag value which an app can check via the volume handle or a file handle. Existing app code therefore has to change to do this, and to use memory-mapping-based reads and writes (load and store instructions, or Windows' faster non-temporal instructions) for data access.
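As a sketch of what such a check can look like: the Windows SDK documents a FILE_DAX_VOLUME bit (0x20000000) among the filesystem flags returned by GetVolumeInformation. The helper below tests that bit; the ctypes call is illustrative, assumes the documented GetVolumeInformationW signature, and is only meaningful on Windows:

```python
import sys

# FILE_DAX_VOLUME is the filesystem flag the Windows SDK documents for
# GetVolumeInformation on DAX-formatted (Direct Access) volumes.
FILE_DAX_VOLUME = 0x20000000

def is_dax(fs_flags: int) -> bool:
    """True if a volume's filesystem-flags word reports Direct Access."""
    return bool(fs_flags & FILE_DAX_VOLUME)

def query_fs_flags(root="C:\\"):
    """Fetch the filesystem-flags word for a volume root. Windows-only sketch."""
    if not sys.platform.startswith("win"):
        raise OSError("GetVolumeInformationW is a Windows API")
    import ctypes
    flags = ctypes.c_uint32(0)
    if not ctypes.windll.kernel32.GetVolumeInformationW(
            root, None, 0, None, None, ctypes.byref(flags), None, 0):
        raise ctypes.WinError()
    return flags.value
```

An app would call something like `is_dax(query_fs_flags("D:\\"))` once at startup and choose the memory-mapped code path accordingly.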

The benefit is that apps get DDR4 memory-speed access to persistent storage. There are fewer context switches on highly contended data structures, and thousands of CPU cycles are freed up to do more useful work.

In a simple 4K random write test using a single thread against an NVMe SSD, latency was 0.07ms (70 microsecs) and bandwidth about 56MBps. Running the same test against a block-mode NVDIMM-N gives 0.01ms (10 microsecs) latency and 580MBps, roughly ten times faster.

Now, running the same test in byte-addressable (DAX) mode, the result is faster still: more than 8GBps of bandwidth at around 820ns latency.
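Taking the quoted figures at face value, the speedups work out with a few lines of arithmetic (the 8GBps and 820ns values are the "more than"/"around" numbers above, so the ratios are approximate; the dictionary names are just labels):

```python
# Figures quoted in the text: bandwidth in MBps, latency in microseconds.
nvme      = {"bw_mbps": 56,   "lat_us": 70.0}   # 4K random write, one thread, NVMe SSD
scm_block = {"bw_mbps": 580,  "lat_us": 10.0}   # same test, NVDIMM-N in block mode
scm_dax   = {"bw_mbps": 8000, "lat_us": 0.82}   # same test, DAX (byte-addressable) mode

block_speedup = scm_block["bw_mbps"] / nvme["bw_mbps"]      # ~10x bandwidth over NVMe
dax_speedup   = scm_dax["bw_mbps"] / scm_block["bw_mbps"]   # further gain from DAX mode
print(f"block mode over NVMe: {block_speedup:.1f}x; DAX over block mode: {dax_speedup:.1f}x")
```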

Microsoft is working on porting NVML (the Non-Volatile Memory Library) to Windows to help developers adapt apps for DAX volumes. It's a toolset maintained on GitHub and currently works on Linux.

Note that NVDIMMs made entirely of flash will be slower than DRAM DIMMs with flash backup. NVDIMMs made with XPoint will be faster than flash DIMMs but still slower than DRAM DIMMs; DRAM is faster than XPoint.

Of course, if you have a ton of XPoint NVDIMM-Ns replacing significant disk drive or SSD storage and not so much DRAM then your system will totally fly, but not as fast as a full DRAM or DRAM NVDIMM-N system.

The Microsoft videos have pointers at the end for finding out more, and they're worth checking out if you have apps that could benefit from a dose of NVDIMM-N supercharging. The best IO is – wait for it – no IO. ®

