Samsung has advanced its plans to relieve devices of the tedious chore that is moving data out of memory and into a processor – by putting more processing power into memory. It's already running in servers and should become a standard of sorts next year.
The Korean giant's efforts use its very fast Aquabolt high-bandwidth memory (HBM) architecture – tech to which the company added processing-in-memory (PIM) capabilities in February 2021. Samsung hasn't revealed a lot of detail about its PIM implementation, but The Register understands it involves placing a processing unit with unspecified specs alongside each cell array inside memory.
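To make the idea concrete: the appeal of processing-in-memory is that a reduction done beside the cell array sends one result over the bus instead of every operand. The toy model below is purely illustrative – Samsung has not published its design, so the bank count, the in-bank operation, and the traffic accounting here are all assumptions, not the real architecture.

```python
# Conceptual sketch only: bank count, workload, and the "bytes moved" model
# are illustrative assumptions, not Samsung's actual PIM implementation.

class PIMBank:
    """A memory bank with a tiny compute unit next to the cell array."""
    def __init__(self, data):
        self.data = list(data)

    def local_sum(self):
        # The reduction happens inside the bank; one value leaves it.
        return sum(self.data)

def host_sum(banks):
    """Conventional path: every element crosses the bus to the CPU."""
    moved = sum(len(b.data) for b in banks)
    total = sum(x for b in banks for x in b.data)
    return total, moved

def pim_sum(banks):
    """PIM path: banks reduce locally; only one partial per bank moves."""
    partials = [b.local_sum() for b in banks]
    return sum(partials), len(partials)

banks = [PIMBank(range(i, i + 1024)) for i in range(4)]
t1, moved1 = host_sum(banks)
t2, moved2 = pim_sum(banks)
assert t1 == t2            # same answer either way
print(moved1, moved2)      # 4096 values shipped vs just 4
```

The arithmetic is trivial; the point is the traffic ratio, which is where PIM's claimed energy savings come from.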
In early 2021 Samsung announced it had HBM and PIM working together in the same piece of silicon. Yesterday it announced it's made the two work inside a Xilinx Virtex UltraScale+ (Alveo) AI accelerator, and also advanced HBM-PIM to a point at which it is ready for deployment inside DIMMs and mobile memory.
Using HBM-PIM in the Xilinx device delivered a 2.5x improvement in system performance, while reducing energy consumption by 60 per cent.
Samsung's now talking up the "AXDIMM" – accelerated DIMM – and says units are currently being tested on customer servers.
The company claims that an AXDIMM "can perform parallel processing of multiple memory ranks (sets of DRAM chips) instead of accessing just one rank at a time". Test results suggest "approximately twice the performance in AI-based recommendation applications and a 40 per cent decrease in system-wide energy usage".
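Samsung's wording suggests the gain comes from driving all of a module's ranks at once rather than serialising accesses to one rank at a time. A back-of-the-envelope sketch, with a made-up rank count and latency model (none of these figures come from Samsung):

```python
# Illustrative model of rank-level parallelism: the rank count, op counts,
# and cycle costs are invented; only the serial-vs-parallel contrast
# reflects Samsung's description of the AXDIMM.

RANKS = 4            # a module with several DRAM ranks
OPS_PER_RANK = 1000  # operations dispatched to each rank
CYCLES_PER_OP = 1

def serial_cycles():
    """A host that can address only one rank at a time."""
    return RANKS * OPS_PER_RANK * CYCLES_PER_OP

def parallel_cycles():
    """An in-DIMM engine working all ranks simultaneously."""
    return OPS_PER_RANK * CYCLES_PER_OP

print(serial_cycles() / parallel_cycles())  # → 4.0 in this toy model
```

Real workloads won't parallelise perfectly, which is consistent with Samsung quoting roughly 2x rather than a clean multiple of the rank count.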
SAP gave the tech a thumbs-up in Samsung's canned statement, saying it likes the idea of speeding up its in-memory databases.
Samsung has hinted at wider collaborations, too, stating that it "plans to expand its AI memory portfolio by working with other industry leaders to complete standardization of the PIM platform in the first half of 2022.
"The company will also continue to foster a highly robust PIM ecosystem in assuring wide applicability across the memory market," according to its canned statement.
Which sounds rather tasty, because who doesn't want faster and more flexible memory that changes the way servers behave?
Actually, that's a tougher question than it sounds, given that software won't be aware of AXDIMMs on day one – or even year one. The struggle to get developers interested is one reason that Intel's Optane storage-class memory hasn't set the world on fire. And Optane, too, promised to make servers faster and more elegant.
It's lazy journalism to end a story by saying time will tell if a new product succeeds. Absent much detail on Samsung's tech and proposed partnerships, it's harder to say much more at this stage. ®