Prepare to have your minds blown, storage industry. 5 words: Client access Optane DIMM caching

Time to ride the spinning-up persistent memory caching whirlwind?

Analysis Super-fast storage array access looks to be coming, with persistent memory front-end caches in the accessing servers.

Persistent memory (PMEM), also known as storage-class memory, is non-volatile solid-state storage in DIMM format, with DRAM-like access speeds but, hopefully, prices somewhere between DRAM and NAND. It’s used by host systems with memory load-store commands rather than via a time-consuming storage IO stack.
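In practice, load-store access means memory-mapping a file on a DAX-enabled PMEM filesystem and reading or writing bytes directly, with no read()/write() syscall per access. Here's a minimal sketch in Python, using an ordinary temp file as a stand-in for a real PMEM-backed path (a real deployment would map a file on a DAX mount such as /mnt/pmem and rely on CPU cache flushes for persistence):

```python
import mmap
import os
import tempfile

# Stand-in for a file on a DAX-mounted PMEM filesystem (e.g. /mnt/pmem/region);
# an ordinary temp file is used here so the sketch runs anywhere.
path = os.path.join(tempfile.mkdtemp(), "pmem-region")
with open(path, "wb") as f:
    f.truncate(4096)                # reserve one page of "persistent memory"

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    mm[0:5] = b"hello"              # plain CPU store -- no storage IO stack
    mm.flush()                      # on real PMEM: flush caches to persistence
    print(mm[0:5])                  # -> b'hello', read back by plain CPU load
    mm.close()
```

The point of the DIMM format is that the mapped bytes are the media itself, so the store above is the write, with no driver, queue or interrupt in the path.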

The prime PMEM DIMM technology is Intel's Optane, based on 3D XPoint media, with Micron's QuantX presumed to be an equivalent; Samsung's Z-NAND (used in its Z-SSD drives) and phase-change memory efforts from Western Digital and other suppliers are further possibilities.

Just as NVMe-over-Fabrics block array access, with its roughly 100-microsecond data access latency, is emerging from the mainstream storage suppliers, storage ten times faster still, with single-digit-microsecond access latencies, is also on its way, spearheaded by Optane DIMMs.

NetApp has already gone in that direction with its MAX Data product.

It employs a persistent memory tier in host servers, based on Intel Optane DIMMs and using, we presume, Intel's Cascade Lake SP Xeons, which are due by the end of the year and will support Optane DIMMs.

The firm's acquired Plexistor software shortcuts the host server's OS IO stack and presents the persistent memory to applications through a POSIX file interface; the applications are said not to need changing. They see access latencies of around four to five microseconds when doing IO to the Optane DIMMs.

The Plexistor software tiers cold data out to an all-flash NetApp backend array using an NVMe-oF transport, and brings in any missing data.
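The tiering described above amounts to an LRU-style hot tier in front of a slower store: evict the coldest data to the backend when the PMEM fills, and fetch it back on demand. A toy sketch of the idea (this is our illustration, not NetApp's actual algorithm; the class and method names are ours):

```python
from collections import OrderedDict

class TieredStore:
    """LRU-managed hot tier (stands in for Optane DIMMs) over a cold backend
    (stands in for the NVMe-oF all-flash array)."""

    def __init__(self, pmem_capacity):
        self.pmem = OrderedDict()   # hot tier, ordered coldest-first
        self.backend = {}           # cold tier
        self.capacity = pmem_capacity

    def _evict(self):
        # Tier the coldest entries out to the backend array.
        while len(self.pmem) > self.capacity:
            cold_key, cold_val = self.pmem.popitem(last=False)
            self.backend[cold_key] = cold_val

    def write(self, key, value):
        self.pmem[key] = value
        self.pmem.move_to_end(key)  # mark hottest
        self._evict()

    def read(self, key):
        if key not in self.pmem:    # miss: bring missing data back in
            self.pmem[key] = self.backend.pop(key)
        self.pmem.move_to_end(key)
        self._evict()
        return self.pmem[key]
```

With a capacity of two, writing keys a, b, c tiers a out to the backend; a later read of a pulls it back into the hot tier and evicts b.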

In effect the persistent memory acts as a front-end cache for the backend array and radically accelerates data access speed, except for cache misses of course.
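The cache-miss caveat matters, because average latency is a hit-rate-weighted blend of the two tiers. A quick back-of-envelope using the roughly 5µs PMEM and 100µs backend-array figures above:

```python
# Effective latency of a PMEM front-end cache over a slower backend array.
# Figures from the article: ~5 us for a PMEM hit, ~100 us for an array miss.
def effective_latency_us(hit_rate, hit_us=5.0, miss_us=100.0):
    return hit_rate * hit_us + (1.0 - hit_rate) * miss_us

for hr in (0.99, 0.95, 0.90):
    print(f"{hr:.0%} hit rate -> {effective_latency_us(hr):.2f} us average")
# 99% -> 5.95 us, 95% -> 9.75 us, 90% -> 14.50 us
```

Even a 90 per cent hit rate only keeps average latency under 15µs, which is why the workload's working set needs to fit the PMEM tier for the headline numbers to hold.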

If widely adopted, this presages an era of much faster block storage access.

Give me a P-M-E-M?

Rob Peglar, president of Advanced Computation and Storage LLC, tells us: "It's a realistic view. Such use of persistent memory to augment/enhance block access does presage a different era – than we're in right now, with SSDs."

Howard Marks, chief scientist at DeepStorage, said: "Is it realistic that Plexistor managing PMEM (DRAM, Optane DC or other) could deliver 1x µs latencies? Sure, but that would only be for 'cache hits' (it's not really a cache, hence the quotes). Accessing data that's not in the PMEM tier will have 100µs latency.

"It could be set up with the local PMEM as the 'storage' tier with the external array for snaps, log (to recover from a node failure) etc, and have 1x µsec latency but that would limit database size to the size of the PMEM layer. With 512GB Optane DC that's 4TB to 6TB/2 socket node or so.

"That's also reasonable but only as a stopgap before moving to in-memory databases that manage the PMEM directly."
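Marks's capacity figure checks out on the arithmetic: assuming one 512GB Optane DC DIMM per memory channel, a two-socket node with eight to twelve PMEM slots populated lands at 4TB to 6TB (the slot counts are our assumption for illustration):

```python
# Sanity check on the 4-6 TB per two-socket node figure quoted above,
# assuming 512 GB Optane DC DIMMs and 8-12 populated PMEM slots.
dimm_gb = 512
for pmem_dimms in (8, 12):
    tb = pmem_dimms * dimm_gb / 1024
    print(f"{pmem_dimms} x {dimm_gb} GB DIMMs = {tb:.0f} TB")
# 8 x 512 GB = 4 TB, 12 x 512 GB = 6 TB
```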

Both Peglar and Marks see PMEM caching/tiering as a stopgap before moving to fully in-memory databases with "memory" possibly meaning a combination of DRAM and PMEM.

Impact on other suppliers

What about the other SAN suppliers? Would they have to employ some form of persistent memory caching in client systems to match NetApp MAX Data's speed?

Marks said: "Is this mainstreamable even in the 'go very fast' end of the market? First, it's Linux-only. That's where the HPC, high-frequency trading, etc, runs, but that means it's stuck in that niche.

"The bigger question is how much demand is there for 10µsec latency in a 125µsec-is-normal world, and how fast do those applications move from storage-dependent databases like Mongo to in-memory databases like HANA or Aerospike?

"I think this gets NetApp bragging rights, new respect as a go-fast vendor, which they've never really been, and a foot in the door at new accounts but, two years after they start shipping, the market dries up as customers move to in-memory."

Peglar said: "Current SAN suppliers will, in all probability, begin (or complete) the integration of persistent memory into their architecture, most likely as a faster tier, and/or cache layer. This will, by its nature, cause SAN suppliers to focus on host-based capability, through a combination of hardware and software, rather than strictly array-based capability, which will continue to evolve as NVMe and NVMe-oF continue to mature."

The use of host persistent memory is, for Peglar, a milestone on a longer journey to IO elimination: "Having said that, I look forward to further development of systems which actually help to, or completely eliminate IO, rather than just make it faster, i.e. with reduced latency, greater throughput, etc. by the use of persistent memory in true memory semantics, pure CPU load/store. This is the ultimate benefit of persistent memory."

PMEM caching/tiering adoption

The Register's storage desk expects other mainstream enterprise storage suppliers – such as Dell EMC, HPE, Hitachi and Pure Storage – to adopt client PMEM caching/tiering in their storage architectures.

NVMe-oF startups – such as Apeiron, E8, Excelero, Kaminario and Pavilion Data Systems – can also be expected to add client system PMEM acceleration into their development roadmaps.

It would not be a surprise for hyperscale service suppliers such as AWS, Azure, eBay, Facebook and the Google Cloud Platform to use the same architecture. Intel has already ceremonially presented its first production Optane DIMM to Google.

We also see scope for its adoption by hyperconverged system vendors. The opportunity to accelerate virtual SAN access across a hyperconverged cluster with PMEM caching looks obvious.

Nutanix, after all, bought PernixData with its hypervisor caching technology and so, we would think, has a host caching technology mindset ready to be fired up.

El Reg predicts that client PMEM caching/tiering will spread across the storage industry like wildfire once persistent memory DIMM products become available and affordable.

A PMEM caching whirlwind is coming and suppliers who don't adopt this caching/tiering technology could be left in tears. ®
