AMD promises 100 gigabyte/second memory

Open HBM architecture detailed, first devices due in June

Lower power, higher capacity, less PCB real estate: that's the list of wins being touted by AMD for its coming High Bandwidth Memory (HBM) architecture.

The previously sketchy details of HBM have been filled out in this AMD white paper.

The architecture is slated to roll out in some of the Radeon 300 series cards when they land, and also forms part of AMD's still-sketchy HPC clustering strategy.

Reductions in power consumption and real estate stem from vertical stacking: through-silicon vias (TSVs) make the die-to-die connections, while microbumps provide the physical separation between stacked dies.

The TSVs reach past the chips to the logic die, and through an interposer to the package substrate. The interposer provides the fast connection to CPU or GPU, with AMD claiming performance “nearly indistinguishable from on-chip integrated RAM”.

AMD's HBM architecture - stacking silicon

Hothardware reckons the much wider bus – 1,024 bits per stack, against the paltry 32-bit bus on an individual GDDR5 chip – is how HBM reaches its claimed 100 gigabytes/second transfers, compared to GDDR5's 28 gigabytes/second.

That performance comes at a lower clock speed – 500 MHz for HBM versus 1.75 GHz for GDDR5 – which, combined with 1.3 V operation against GDDR5's 1.5 V, underpins the spec's claimed 50 per cent power saving and roughly triple the bandwidth per watt (35 gigabytes/second/watt versus 10.66 gigabytes/second/watt).
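Those figures hang together arithmetically. Here's a quick back-of-envelope check in Python – a sketch only: the double-data-rate and quad-pumped multipliers and the implied per-device power draws are our own assumptions, not AMD's published numbers.

```python
# Sanity-check the claimed HBM vs GDDR5 bandwidth and efficiency figures.
# Assumptions (ours, not AMD's): HBM transfers twice per clock (DDR),
# GDDR5 four times per clock (7 Gbps per pin at 1.75 GHz).

def peak_bandwidth_gbs(bus_width_bits, clock_hz, transfers_per_clock):
    """Peak bandwidth in gigabytes/second: bus width x per-pin data rate / 8."""
    return bus_width_bits * clock_hz * transfers_per_clock / 8 / 1e9

hbm = peak_bandwidth_gbs(1024, 500e6, 2)    # 1,024-bit bus at 500 MHz
gddr5 = peak_bandwidth_gbs(32, 1.75e9, 4)   # 32-bit bus at 1.75 GHz

print(f"HBM stack peak:  {hbm:.0f} GB/s")   # 128 GB/s raw; AMD quotes 100+
print(f"GDDR5 chip peak: {gddr5:.0f} GB/s") # 28 GB/s, matching the article

# Implied power draw from the bandwidth-per-watt claims
print(f"HBM:   {100 / 35:.2f} W per 100 GB/s")
print(f"GDDR5: {28 / 10.66:.2f} W per 28 GB/s")
print(f"Efficiency ratio: {35 / 10.66:.2f}x")  # ~3.3x: 'around three times'
```

The GDDR5 number lands exactly on the article's 28 GB/s; the raw HBM figure of 128 GB/s is comfortably above the 100 GB/s AMD quotes per stack.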

Yet another saving and performance benefit arising from the small size is that the HBM memory will be able to sit on the same substrate as the CPU/GPU, AMD says.

AMD's HBM architecture - on-substrate memory

HBM puts memory on the same substrate as the processor

Hothardware reckons the first examples will be seen in June with AMD's next GPU releases. ®
