Micron, SK Hynix shipping bandwidth-boosting LPDDR5X for on-device AI

So long as you don’t mind that it's soldered down

Memory vendors Micron and SK Hynix this week began shipping their first LPDDR5X memory modules capable of achieving speeds up to 9,600 MT/s.

For reference, that's technically 12 percent faster than the LPDDR5X spec, and 30 to 50 percent faster than the memory found in most thin-and-light notebooks.

That speed translates into higher memory bandwidth, something that's become increasingly important as chipmakers have boosted core counts and embedded ever faster GPUs, neural processing units, and other co-processors into their system on chips (SoCs).

For instance, with this week's announcement of the Snapdragon 8 Gen 3 SoC, Qualcomm is betting on a future where customers run machine learning models — including large language models like Meta's Llama 2 and image generators like Stable Diffusion — entirely on their personal devices.

Most GPUs and accelerators used to run AI workloads rely on speedy GDDR or high-bandwidth memory (HBM) modules. However, in a slim laptop, tablet, or smartphone this isn't always practical, and the CPU, GPU, and other co-processors must often share a common pool of LPDDR5.

One technique to keep bandwidth from becoming a bottleneck is co-packaging memory alongside the compute dies. Apple's M-series processors are a prime example of that approach, with the memory modules mounted on the same package as the CPU and co-processors.

Apple's M2 Max — for the moment, its most powerful notebook SoC — can deliver 400 GB/s of memory bandwidth to the CPU and GPU. To put that in perspective, that's just shy of the 460 GB/s of bandwidth AMD's fourth-gen Epyc datacenter CPUs can manage when all 12 of their memory channels are fully populated.

If Apple were to move to Micron or SK Hynix's latest 9,600 MT/s memory, the company might just be able to eke out another 200 GB/s of bandwidth.
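The arithmetic behind those figures is straightforward: peak bandwidth is the transfer rate multiplied by the bus width in bytes. A quick back-of-the-envelope sketch, assuming the M2 Max's 512-bit memory bus and LPDDR5-6400-class modules (the specific rates here are illustrative, not confirmed by either vendor):

```python
def bandwidth_gbs(transfer_rate_mts: int, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfers per second x bytes per transfer."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

# M2 Max today: 6,400 MT/s over an assumed 512-bit bus -> ~409.6 GB/s
current = bandwidth_gbs(6400, 512)

# Same bus at 9,600 MT/s -> ~614.4 GB/s, roughly 200 GB/s more
faster = bandwidth_gbs(9600, 512)

# Epyc sanity check: 12 channels x 64 bits of DDR5-4800 -> ~460.8 GB/s
epyc = bandwidth_gbs(4800, 12 * 64)

print(f"{current:.1f} GB/s -> {faster:.1f} GB/s (+{faster - current:.1f}), Epyc: {epyc:.1f} GB/s")
```

The same formula reproduces the article's Epyc figure, which is why a wide on-package LPDDR bus can rival a 12-channel server memory subsystem.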

Intel is also rumored to be working on a version of its Meteor Lake processors with on-package LPDDR memory. However, it's not just in space-constrained mobile devices that we're seeing chipmakers go this route. Nvidia's 144-core Grace CPU Superchip uses LPDDR5X memory to keep the processors fed with 1 TB/s of bandwidth.

One of the downsides to LPDDR memory is that you can't upgrade the device by tossing in a higher-capacity SODIMM. This isn't really a problem for smartphones and tablets, but it may be a turn-off for prospective laptop buyers. LPDDR modules are designed to be soldered to the motherboard or co-packaged alongside the SoC, so taking advantage of LPDDR5X's higher operating frequencies means forgoing upgradability.

Having said that, we won't have to wait long for SK Hynix and Micron's latest memory modules to hit the market. The companies claim that Qualcomm's Snapdragon 8 Gen 3 will be among the first to support their 9,600 MT/s memory modules. ®
