Analysis Super-fast storage array access looks to be coming, with persistent memory front-end caches in the accessing servers.
Persistent memory (PMEM), also known as storage-class memory, is non-volatile solid-state storage in DIMM format, with DRAM-like access speeds but, hopefully, prices somewhere between DRAM and NAND. It’s used by host systems with memory load-store commands rather than via a time-consuming storage IO stack.
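In code terms, load-store access means memory-mapping the device and writing to it directly, rather than issuing read/write syscalls through the block stack. A minimal Python sketch follows — note it uses an ordinary temp file as a stand-in for a real DAX-mounted PMEM filesystem (e.g. something like /mnt/pmem on Linux), where the same stores would land straight on persistent media:

```python
import mmap, os, tempfile

# Stand-in for a file on a DAX-mounted PMEM filesystem.
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")
with open(path, "wb") as f:
    f.truncate(4096)                   # one page of "persistent memory"

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 4096)  # map the region into our address space
    buf[0:5] = b"hello"                # a plain memory store -- no read()/write() syscalls
    buf.flush()                        # on real PMEM this is a cache-line flush, not page IO
    data = bytes(buf[0:5])
    buf.close()
os.remove(path)
print(data.decode())                   # -> hello
```

On genuine PMEM hardware the same pattern is typically wrapped by libraries such as Intel's PMDK, which handle cache flushing and fencing for crash consistency.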
Just as NVMe-over-Fabrics block array access, with its data access latency of 100 microseconds or so, is emerging from the mainstream storage suppliers, storage ten times faster still, with single-digit microsecond access latencies, is also on its way, spearheaded by Optane DIMMs.
NetApp has already gone in that direction with its MAX Data product.
It employs a persistent memory tier in host servers, based on Intel Optane DIMMs and paired, we presume, with Intel's Cascade Lake SP Xeons, which are due by the end of the year and support Optane DIMMs.
The firm's acquired Plexistor software technology shortcuts the host server OS IO stack and presents the persistent memory to applications through a POSIX file interface, so the applications themselves are said to need no changes. They see roughly four to five microseconds of access latency when doing IO to the Optane DIMMs.
The Plexistor software tiers cold data out to an all-flash NetApp backend array using an NVMe-oF transport, and brings in any missing data.
In effect the persistent memory acts as a front-end cache for the backend array and radically accelerates data access speed, except for cache misses of course.
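The economics of that cache are easy to put numbers on. Using the figures quoted in this piece — roughly 5µs for a PMEM hit and roughly 100µs for a fetch from the backend array on a miss — a back-of-envelope calculation shows how quickly misses dominate the average:

```python
def effective_latency_us(hit_rate, hit_us=5.0, miss_us=100.0):
    """Average access latency for a PMEM front-end cache over a slower array."""
    return hit_rate * hit_us + (1.0 - hit_rate) * miss_us

for h in (0.99, 0.95, 0.80):
    print(f"{h:.0%} hit rate -> {effective_latency_us(h):.1f} us average")
```

At a 99 per cent hit rate the average stays near 6µs, but at 80 per cent it balloons to 24µs — which is why working sets that fit entirely in the PMEM tier are the sweet spot.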
If widely adopted, this presages an era of much faster block storage access.
Give me a P-M-E-M?
Rob Peglar, president of Advanced Computation and Storage LLC, tells us: "It's a realistic view. Such use of persistent memory to augment/enhance block access does presage a different era – than we're in right now, with SSDs."
Howard Marks, chief scientist at DeepStorage.net, said: "Is it realistic that Plexistor managing PMEM (DRAM, Optane DC or other) could deliver 1x µs latencies? Sure, but that would only be for 'cache hits' (it's not really a cache, hence the quotes); accessing data that's not in the PMEM tier will have 100µs latency.
"It could be set up with the local PMEM as the 'storage' tier with the external array for snaps, log (to recover from a node failure) etc, and have 1x µsec latency but that would limit database size to the size of the PMEM layer. With 512GB Optane DC that's 4TB to 6TB/2 socket node or so.
"That's also reasonable but only as a stopgap before moving to in-memory databases that manage the PMEM directly."
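The capacity ceiling Marks cites follows from DIMM-slot arithmetic. Assuming six memory channels per socket on a two-socket Xeon SP node (an assumption on our part; actual slot counts vary by platform), 512GB Optane modules top out as follows:

```python
module_gb = 512
sockets = 2
slots_per_socket = 6   # assumed memory channels per Xeon SP socket

max_tb = sockets * slots_per_socket * module_gb / 1024
print(max_tb)          # 6.0 TB with every slot populated by an Optane module
```

Populating eight of the twelve slots instead gives the 4TB lower bound Marks mentions, with the remaining slots left for DRAM.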
Both Peglar and Marks see PMEM caching/tiering as a stopgap before moving to fully in-memory databases with "memory" possibly meaning a combination of DRAM and PMEM.
Impact on other suppliers
What does this mean for other SAN suppliers in the storage industry? Would they have to employ some form of persistent memory caching in accessing client systems to match NetApp's MAX Data speed?
Marks said: "Is this mainstreamable even in the 'go very fast' end of the market? First, it's Linux-only; that's where the HPC, high-frequency trading, etc, runs, but that means it's stuck in that niche.
"The bigger question is how much demand is there for 10µsec latency in a 125µsec-is-normal world, and how fast do those applications move from storage-dependent databases like Mongo to in-memory databases like HANA or Aerospike?
"I think this gets NetApp bragging rights, new respect as a go-fast vendor, which they've never really been, and a foot in the door at new accounts but, two years after they start shipping, the market dries up as customers move to in-memory."
Peglar said: "Current SAN suppliers will, in all probability, begin (or complete) the integration of persistent memory into their architecture, most likely as a faster tier, and/or cache layer. This will, by its nature, cause SAN suppliers to focus on host-based capability, through a combination of hardware and software, rather than strictly array-based capability, which will continue to evolve as NVMe and NVMe-oF continues to mature."
The use of host persistent memory is, for Peglar, a milestone on a longer journey to IO elimination: "Having said that, I look forward to further development of systems which actually help to, or completely eliminate IO, rather than just make it faster, i.e. with reduced latency, greater throughput, etc. by the use of persistent memory in true memory semantics, pure CPU load/store. This is the ultimate benefit of persistent memory."
PMEM caching/tiering adoption
The Register's storage desk expects other mainstream enterprise storage suppliers – such as Dell EMC, HPE, Hitachi and Pure Storage – to adopt client PMEM caching/tiering in their storage architectures.
NVMe-oF startups – such as Apeiron, E8, Excelero, Kaminario and Pavilion Data Systems – can also be expected to add client system PMEM acceleration into their development roadmaps.
It would not be a surprise for hyperscale service suppliers such as AWS, Azure, eBay, Facebook and the Google Cloud Platform to use the same architecture. Intel has already ceremonially presented its first production Optane DIMM to Google.
We also see scope for adoption by hyperconverged system vendors: accelerating virtual SAN access across a hyperconverged cluster with PMEM caching looks an obvious win.
Nutanix, after all, bought PernixData with its hypervisor caching technology and so, we would think, has a host caching technology mindset ready to be fired up.
El Reg predicts that client PMEM caching/tiering will spread across the storage industry like wildfire once persistent memory DIMM products become available and affordable.
A PMEM caching whirlwind is coming and suppliers who don't adopt this caching/tiering technology could be left in tears. ®