
Non-binary DDR5 is finally coming to save your wallet

Need a New Year's resolution? How about stop paying for memory you don't need

We're all used to dealing with system memory in neat powers of two. As capacity goes up, it follows a predictable binary scale, doubling from 8GB to 16GB to 32GB and so on. But with the introduction of DDR5 and non-binary memory in the datacenter, all of that's changing.

Instead of jumping straight from a 32GB DIMM to a 64GB one, DDR5, for the first time, allows for half steps in memory density. You can now have DIMMs with 24GB, 48GB, 96GB, or more in capacity.

The added flexibility offered by these DIMMs could end up driving down system costs, as customers are no longer forced to buy more memory than they need just to keep their workloads happy.

What the heck is non-binary memory?

Non-binary memory isn't actually all that special. What makes it different from standard DDR5 comes down to the DRAM chips used to build the DIMMs.

Instead of the 16Gb — that's gigabit — chips found on most DDR5 memory today, non-binary DIMMs use 24Gb DRAM chips. Take 20 of these chips and bake them onto a DIMM, and you're left with 48GB of usable memory after you take into account ECC and metadata storage.
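As a quick sanity check on that 48GB figure, here's a minimal sketch of the arithmetic, assuming a two-rank ECC DIMM in which 16 of the 20 chips hold addressable data and the remaining four handle ECC and metadata:

```python
# Back-of-the-envelope check of the 48GB figure above.
# Assumption: a two-rank DDR5 ECC DIMM with 20 x8 chips,
# of which 16 hold addressable data and 4 carry ECC/metadata.

CHIP_DENSITY_GBIT = 24               # 24Gb DRAM chips
DATA_CHIPS = 16                      # 2 ranks x 8 data chips
ECC_CHIPS = 4                        # 2 ranks x 2 ECC chips
GB_PER_CHIP = CHIP_DENSITY_GBIT / 8  # gigabits -> gigabytes = 3GB

usable = DATA_CHIPS * GB_PER_CHIP             # what the OS actually sees
raw = (DATA_CHIPS + ECC_CHIPS) * GB_PER_CHIP  # everything soldered on

print(f"usable: {usable:.0f}GB, raw: {raw:.0f}GB")  # usable: 48GB, raw: 60GB
```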

According to Brian Drake, senior business development manager at Micron, you can usually get to around 96GB of memory on a DIMM before you're forced to resort to advanced packaging techniques.

Using through-silicon via (TSV) stacking or dual-die packaging, DRAM vendors can achieve much higher densities. Using Samsung's eight-layer TSV process, for example, the chipmaker could achieve densities as high as 24GB per DRAM package for 768GB per DIMM.
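To see where those figures come from, here's a rough sketch of the arithmetic, assuming eight 24Gb dies per TSV stack and 32 data packages per DIMM (both inferred from the numbers quoted above rather than from Samsung's spec sheets):

```python
# Rough arithmetic behind the stacked-die figures above.
# Assumptions: eight 24Gb dies per TSV package, and 32 data packages
# on the DIMM (ECC packages excluded from the usable total).

GBIT_PER_DIE = 24
DIES_PER_STACK = 8
DATA_PACKAGES_PER_DIMM = 32

gb_per_package = GBIT_PER_DIE * DIES_PER_STACK / 8   # 24GB per package
dimm_capacity = gb_per_package * DATA_PACKAGES_PER_DIMM

print(f"{gb_per_package:.0f}GB per package, {dimm_capacity:.0f}GB per DIMM")
# 24GB per package, 768GB per DIMM
```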

To date, all of the major memory vendors, including Samsung, SK-Hynix, and Micron, have announced 24Gb chips for use in non-binary DIMMs.

The cost problem

Arguably the biggest selling point behind non-binary memory comes down to cost and flexibility.

"For a typical datacenter, cost of memory is significant and can be even higher than cost of compute," CCS Insights analyst Wayne Lam told The Register.

As our sister site The Next Platform reported earlier this year, memory can account for as much as 14 percent of a server’s cost. And in the cloud, some industry pundits put that number closer to 50 percent.

"Doubling of DRAM capacity — 32GB to 64GB to 128GB — now produces large steps in cost. The cost per bit is fairly constant, therefore, if you keep doubling, the cost increments becomes prohibitively expensive," Lam explained. "Going from 32GB to 48GB to 64GB and 96GB offers gentler price increments."

Take this thought experiment as an example:

Say your workload benefits from having 3GB per thread. On a 96-core AMD Epyc 4-based system with SMT enabled, that's 192 threads, so you'd need at least 576GB of memory spread across the chip's 12 channels with one DIMM per channel. However, 32GB DIMMs would leave you 192GB short, while 64GB DIMMs would leave you with just as much in surplus. You could drop down to populating 10 channels and get closer to your target, but then you're going to take a hit to memory bandwidth and pay extra for the privilege. And this problem only gets worse as you scale up.
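Here's that thought experiment worked through as a quick sketch, assuming 192 threads (96 cores with SMT), a 3GB-per-thread target, and one DIMM in each of the chip's 12 memory channels:

```python
# The thought experiment above, in numbers.
# Assumptions: 96 cores x 2 SMT threads, 3GB per thread,
# 12 memory channels, one DIMM per channel.

THREADS = 96 * 2
TARGET_GB = THREADS * 3      # 576GB
CHANNELS = 12

for dimm_gb in (32, 48, 64):
    total = dimm_gb * CHANNELS
    delta = total - TARGET_GB
    print(f"{dimm_gb}GB DIMMs -> {total}GB total ({delta:+d}GB vs target)")

# 32GB DIMMs -> 384GB total (-192GB vs target)
# 48GB DIMMs -> 576GB total (+0GB vs target)
# 64GB DIMMs -> 768GB total (+192GB vs target)
```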

In a two-DIMM-per-channel configuration — something we'll note AMD doesn't support on Epyc 4 at launch — you could use mixed-capacity DIMMs to home in on the ideal memory-to-core ratio, but as Drake points out, this isn't a perfect solution.

"Maybe the system has to down clock that two-DIMM-per-channel solution, so it can't run the maximum data rate. Or maybe there's a performance implication of having uneven ranks in each channel," he said.

By comparison, 48GB DIMMs will almost certainly cost less, while allowing you to hit your ideal memory-to-core ratio without sacrificing bandwidth. And as we've talked about in the past, memory bandwidth matters a lot as chipmakers continue to push the core counts of their chips ever higher.

The calculus is going to look different depending on your needs, but at the end of the day, non-binary memory offers greater flexibility for balancing cost, capacity, and bandwidth.

And there aren't really any downsides to using non-binary DIMMs, Drake said, adding that, in certain situations, they may actually perform better.

What about CXL?

Of course, non-binary memory isn't the only way to get around the memory-to-core ratio problem.

"Technologies such as non-binary capacities are helpful, but so is the move to CXL memory — shared system memory — and on-chip high-bandwidth memory," Lam said.

With AMD's Epyc 4 processors launched last fall and Intel's Sapphire Rapids processors due next month, both of which support the CXL interconnect, customers will soon have another option for adding memory capacity and bandwidth to their systems. Samsung and Astera Labs have both shown off CXL memory-expansion modules, and Marvell plans to offer controllers for similar products in the future.

However, these CXL modules are less an alternative to non-binary memory than a complement to it. In fact, Astera Labs' expansion modules should work just fine alongside 48GB, 96GB, or larger non-binary DIMMs. ®
