Double-DIMMed XPoint wastes sockets

It makes me cross


Analysis: A Xitore white paper compares coming XPoint DIMMs with Xitore's own flash DIMMs, and claims each XPoint DIMM needs a companion DRAM cache DIMM, obviously halving XPoint DIMM density.

The startup has its own tech to push – NVDIMM-X – but, even so, the paper is revealing about XPoint DIMMery.

Doug Finke, director of product marketing at Xitore, claims that an XPoint DIMM, expected in 2017/18, will have a small DRAM cache located on a separate DIMM module. Finke explains the reason for having this DRAM cache*:

The 3D Xpoint DIMM uses an external separate standard DRAM DIMM module as a write cache architecture and is required because the write accesses are much slower than the read accesses in 3D Xpoint media.

Here is a Xitore graphic showing its point.

[Xitore graphic: XPoint DIMM paired with a DRAM cache DIMM]

We understand from Micron's Nicolas Maigne, who handles EMEA regional business development and marketing, that any Micron QuantX DIMM would be similarly encumbered.

Intel documentation says:

Future 3D XPoint DIMMs may make it practical for main memory to hold terabytes – 6TB (6,000GB) is predicted. 3D XPoint DIMMs will probably have a slower bandwidth than double data rate (DDR) DIMMs, perhaps with their contents cached in MCDRAM (multi-channel DRAM), HBM memory to compensate for this. Such DDR DIMM caches could be about 10 per cent of the capacity of the main memory, so these caches can be 600GB in size – a far cry from the 4KB main memory on the machines from the early 1970s.

If this pairing of XPoint DIMM and a DRAM cache DIMM is correct then several consequences follow:

  1. For every XPoint DIMM, two DIMM slots are needed, effectively halving the potential XPoint DIMM capacity on a host.
  2. Memory bus capacity is needed to transfer data from XPoint DIMM to cache DIMM.
  3. XPoint is a backing store to a cache DIMM and effective caching algorithms can make alternative and less expensive backing stores more attractive.

This third point seems to us storage peeps at El Reg to be crucial. Assume a 5 per cent cache miss rate with 1 million IOs using this scheme. Then 50,000 IOs will happen at the speed of the backing store and 950,000 will happen at (cache) DRAM speed. Let's further assume DRAM access speed equals 1 time unit and XPoint access speed equals 5 time units. Then the total access time can be calculated as:

(950,000 x 1) + (50,000 x 5) = 950,000 + 250,000 = 1,200,000 time units.

The average time per access is 1.2 time units.
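
For those who want to fiddle with the assumptions, here is a minimal Python sketch of that back-of-the-envelope sum. The hit rate and the one-unit/five-unit access times are the illustrative assumptions above, not measured figures, and the function name is ours.

```python
def avg_access_time(ios, miss_rate, cache_time, backing_time):
    # Weighted-average access time for a DRAM cache sitting in front of a
    # slower backing store, in arbitrary time units.
    misses = int(ios * miss_rate)
    hits = ios - misses
    total = hits * cache_time + misses * backing_time
    return total, total / ios

# XPoint DIMM behind a DRAM cache DIMM: 1M IOs, 5% misses, 1 vs 5 time units
total, avg = avg_access_time(1_000_000, 0.05, cache_time=1, backing_time=5)
print(total, avg)  # 1200000 1.2
```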

Let's employ flash DIMMs instead of XPoint ones, with an access time of 50 time units, 10 times slower, and use the same DRAM caching scheme and hit rate. What is the total access time for 1 million IOs?

(950,000 x 1) + (50,000 x 50) = 950,000 + 2,500,000 = 3,450,000 time units.

The average time per access is 3.45 time units. The difference from 1.2 is significant, being almost three times longer. And this leads to a question: how much extra will you pay for XPoint DIMM speed?
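
The same sum for the flash case, again with every figure illustrative rather than measured:

```python
# Same 95 per cent hit rate and 1-unit DRAM cache, but a flash backing
# store assumed to take 50 time units per miss.
hits, misses = 950_000, 50_000
total = hits * 1 + misses * 50   # 950,000 + 2,500,000
print(total)                     # 3450000 time units
print(total / (hits + misses))   # 3.45 units per access, versus 1.2 for cached XPoint
```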

Finke has this to say about XPoint cost: "It is touted to have a cost of about one-half that of DRAM, but still 5x that of NAND." Will you pay five times as much for a near-3x speed boost? We imagine that any accompanying DRAM cache DIMM would cost extra, effectively putting up the XPoint DIMM cost, so you might have to pay more than 5x the NAND price.
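
Putting our illustrative latencies against Finke's cost claim gives a crude price-versus-speed comparison; every number below is an assumption from the sums above, not a quoted figure:

```python
# Relative media cost (Finke's "5x that of NAND") against the cached
# average access times worked out above; all figures illustrative.
xpoint_cost, nand_cost = 5.0, 1.0    # relative price per unit of capacity
xpoint_avg, flash_avg = 1.2, 3.45    # average time units per access, behind a DRAM cache

speedup = flash_avg / xpoint_avg     # ~2.9x faster with XPoint as the backing store
premium = xpoint_cost / nand_cost    # 5x the media price
print(f"{speedup:.2f}x faster for {premium:.0f}x the cost")  # 2.88x faster for 5x the cost
```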

"Obviously this is a crude calculation with many assumptions but a message is clear: DRAM caching won't hide the latency difference between XPoint and other non-volatile DIMM technology speeds."

"We asked Intel what it thought about this use of a DRAM cache DIMM accompanying XPoint DIMMs and a spokesperson said: "Intel does not comment on unannounced products or rumors and speculation. We’ll have news regarding 3D XPoint-based products in 2017 – stay tuned." We will." ®

* Refer to this Xitore white paper "Comparison of the NVDIMM-X with 3D Xpoint in a DIMM Form Factor" authored by Finke. It discusses more than just the latency issue we have highlighted.
