HGST sets hearts aflutter with second sexy PCM demo

Plenty of sizzle so far, but no product steak to speak of yet

HGST is being a sexy beast and strutting its Phase Change Memory stuff once again at the Flash Memory Summit.

Sex sells in the flash biz and HGST thinks it has a hot little number – again – with its Phase Change Memory technology. A year on from its 3 million IOPS/1.5 microsecond latency demo of a PCIe-connected PCM SSD, it is showing an RDMA-over-InfiniBand-connected PCM SSD with less than two microseconds round-trip access latency for 512B reads, and throughput exceeding 3.5 GB/s for 2KB block sizes.
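For a sense of scale, here is a quick back-of-the-envelope conversion of those figures into IOPS. The arithmetic is ours, not HGST's, and assumes decimal gigabytes and 2,048-byte blocks, which HGST does not spell out:

# Rough arithmetic on HGST's quoted figures; the unit assumptions are ours.
throughput_bytes_per_sec = 3.5e9      # "exceeding 3.5 GB/s"
block_size_bytes = 2 * 1024           # 2KB blocks
round_trip_latency_sec = 2e-6         # "<2 microseconds" for 512B reads

iops_at_2kb = throughput_bytes_per_sec / block_size_bytes
serial_iops_512b = 1 / round_trip_latency_sec   # one request in flight at a time

print(f"~{iops_at_2kb / 1e6:.1f} million IOPS at 2KB blocks")              # ~1.7 million
print(f"~{serial_iops_512b / 1e3:.0f}K IOPS for a serialised 512B stream")  # ~500K

In other words, hitting millions of IOPS at those block sizes takes plenty of requests in flight as well as low per-request latency.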

The background pitch is that volatile DRAM is hugely expensive in its electrical use, whereas HGST’s non-volatile PCM card is electrically anorexic while providing DRAM-like speed.

The InfiniBand came from Mellanox, and its marketing VP Kevin Deierling had a canned quote saying: “In the future, our goal is to support PCM access using both InfiniBand and RDMA over Converged Ethernet (RoCE) to increase the scalability and lower the cost of in-memory applications.”

HGST is calling its set-up a persistent memory fabric, describing it as a PCM-based, RDMA-enabled in-memory compute cluster architecture. It says the host servers don’t need either BIOS changes or app software mods.

The demo at the Flash Memory Summit shows that RDMA-connected PCM storage is faster than NAND would be, but you can’t buy it.

What is HGST up to here? Last year its PCM chips came from Micron. Let’s assume they’re Micron ones again and that HGST’s interest is in selling PCM storage cards. The cards are obviously not ready for prime time, because HGST isn’t selling them.

It appears to be aiming to develop an RDMA-enabled, InfiniBand- or fast-Ethernet-linked PCM box that can be mapped into several servers’ memory address spaces and so help propel in-memory compute to the foreground.

Memory mapping of remote PCM using the Remote Direct Memory Access (RDMA) protocol over networking infrastructures, such as Ethernet or InfiniBand, enables a seamless, wide-scale deployment of in-memory computing. This network-based approach allows applications to harness the non-volatile PCM across multiple computers to scale out as needed.
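To give a flavour of that programming model, here is a minimal sketch of our own, not HGST’s code: an ordinary local file stands in for the remote PCM region that the persistent memory fabric would expose over RDMA, and the file name and region size are made up for illustration.

import mmap
import os

# Local-file stand-in for a byte-addressable persistent memory region.
# In HGST's demo the region would sit in a remote PCM box reached over
# RDMA; here a plain file illustrates the memory-mapping idea.
PATH = "pcm_region.bin"          # hypothetical backing file
REGION_SIZE = 4 * 1024 * 1024    # 4 MiB, chosen arbitrarily

# Create and size the backing store.
with open(PATH, "wb") as f:
    f.truncate(REGION_SIZE)

fd = os.open(PATH, os.O_RDWR)
region = mmap.mmap(fd, REGION_SIZE)   # map the region into our address space

# The application then does ordinary loads and stores against the mapping,
# with no special storage API calls - the point of "no app software mods".
region[0:11] = b"hello, pcm!"
print(bytes(region[0:11]))

region.flush()     # push dirty data back to the (stand-in) persistent medium
region.close()
os.close(fd)

A real deployment would map a network-attached PCM region rather than a local file, but the load/store view the application gets is the same in spirit.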

HGST CTO Steve Campbell’s canned quote said: “Our work with Mellanox proves that non-volatile main memory can be mapped across a network with latencies that fit inside the performance envelope of in-memory compute applications.”

Diablo Technologies’ Memory1 is in-server DIMM-connected NAND acting as a DRAM substitute and expander. HGST’s persistent memory fabric is a networked and presumably more scalable DRAM substitute and expander. Is it faster in access than Memory1? We don’t yet know.

Intel and Micron’s 3D XPoint memory is faster than NAND but not as fast as DRAM, and will be/should be/could be ready in 2016. How it compares with PCM on price and performance will be fascinating to see.

For now, HGST’s demo is a PCM tease. If you like being teased, then hop along to HGST’s booth #645-647 at the 2015 Flash Memory Summit in the Santa Clara Convention Center, Santa Clara, CA, on August 11-13, 2015, and take a look at its sexy phase-changing beast. ®
