MetaRAM now pumping 288GB of memory into Intel boxes

Triple stuffed


Super-charging memory shop MetaRAM has started talking up its beefy DDR3 modules.

MetaRAM's top customer Hynix has already taken delivery of the DDR3 MetaSDRAM, which allows server customers to pack far more memory inside their standard systems. For example, Hynix is hyping "the world's first" 16GB 2-rank DIMMs, which it demonstrated this week at the Intel Developer Forum. And it's going to ship 8GB 2-rank DIMMs based on the MetaRAM technology as well.

All told, you're looking at, oh, a tripling of the amount of memory that can slot into workstations and servers.

MetaRAM is led by Fred Weber, the former CTO at AMD. The company launched in February with its unique brand of memory stuffing technology.

To shove more memory on each DIMM, companies such as Hynix pick up the MetaSDRAM chipset, which slots in between a memory controller and DRAM. As a result, memory makers can pack up to four times as many DRAMs onto standard DIMMs.
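To put rough numbers on that multiplier, here is a minimal back-of-the-envelope sketch. It assumes a registered 2-rank DIMM built from 1Gb x4 DDR3 DRAMs (a common density at the time) and the up-to-4x chip multiplication described above; the densities and helper names are illustrative assumptions, not figures from MetaRAM.

    # Rough DIMM capacity math, assuming 1Gb x4 DDR3 DRAMs (illustrative, not MetaRAM's spec)
    def dimm_capacity_gb(ranks, data_chips_per_rank=16, chip_gbit=1, multiplier=1):
        """Capacity in GB: ranks x data chips x per-chip density (Gbit),
        times the MetaSDRAM chip multiplier (up to 4x per the article),
        divided by 8 to convert gigabits to gigabytes."""
        return ranks * data_chips_per_rank * chip_gbit * multiplier / 8

    standard = dimm_capacity_gb(ranks=2)                # plain 2-rank DIMM
    stuffed  = dimm_capacity_gb(ranks=2, multiplier=4)  # same slot, 4x the DRAMs
    print(standard, stuffed)

Under those assumptions a plain 2-rank module lands at 4GB, while the 4x-stuffed version reaches the 16GB that Hynix is demonstrating.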

"The major benefit of DDR3 MetaSDRAM technology is that it enables this larger memory capacity without negatively impacting the operating frequency of the DDR3 memory channel like standard R-DIMMs," MetaRAM said in a statement. "It is the only technology that has been shown publicly to run 24GB of DDR3 SDRAM in a channel at 1066 million transactions per second (MT/s).

"Using three 16GB DIMM modules, users can achieve 48GB per channel, while other cost-effective solutions max out at 16GB per channel." MetaRAM, which sells DDR2 technology today, is offering up 4GB, 8GB and 16GB modules to interested memory makers. The 4GB and 8GB units go into full product in Oct., while the 16GB unit hits the streets in Dec.

You can expect to see Intel-based servers with between 144GB and 288GB of memory thanks to the technology. ®
