Why Intel killed its Optane memory business

Effort to create a new tier of memory flopped as rivals offered faster and more open alternatives

Analysis Intel CEO Pat Gelsinger has confirmed that Intel will quit its Optane business, ending its attempt to create and promote a tier of memory that was a little slower than RAM but offered the virtues of persistence and high IOPS.

The news should not, however, come as a surprise. The division has been on life support for some time, following Micron's 2018 decision to wind down its joint venture with Intel and, later, sell off the fab in which the 3D XPoint chips that go into Optane drives and modules were made. Intel has signaled it is open to using third-party foundries, but without the means to make its own Optane silicon, the writing was on the wall.

As our sister site Blocks and Files reported in May, the sale only came after Micron had saddled Intel with a glut of 3D XPoint memory modules – more than the chipmaker could sell. Estimates put Intel's inventories at roughly two years' worth of supply.

In its poor Q2 earnings report, Intel said quitting Optane would result in a $559 million inventory impairment. In other words, the company is giving up on the project and writing off the inventory as a loss.

The move also signals the end of Intel's SSD business. Two years ago Intel sold its NAND flash business and manufacturing plant to SK hynix to focus its efforts on Optane.

Announced in 2015, 3D XPoint memory arrived in the form of Intel's Optane SSDs two years later. Unlike rival flash-based SSDs, however, Optane drives couldn't compete on capacity or raw throughput. What the devices did offer was some of the strongest random I/O performance on the market – a quality that made them particularly attractive in latency-sensitive applications where sheer IOPS were more important than throughput. Intel claimed its PCIe 4.0-based P5800X SSDs could reach up to 1.6 million IOPS.

Intel also used 3D XPoint in its Optane persistent memory DIMMs, particularly around the launch of its second- and third-gen Xeon Scalable processors.

From a distance, Intel's Optane DIMMs looked no different than your run-of-the-mill DDR4, apart from, maybe, a heat spreader. On closer inspection, however, the DIMMs could be had in capacities far greater than is possible with DDR4 memory today. Capacities of 512GB per DIMM weren't uncommon.

The DIMMs slotted in alongside standard DDR4 and enabled a number of novel use cases, including a tiered memory architecture that was essentially transparent to the operating system and applications. When deployed in this fashion, the DDR4 was treated as a large level-4 cache, while the Optane memory behaved as system memory.

While Optane offered nowhere near the performance of DRAM, the approach made it possible to run very large, memory-intensive workloads, like databases, at a fraction of the cost of an equivalent amount of DDR4 and without software customization. That was the idea, anyway.

Optane DIMMs could also be configured to behave as a high-performance storage device, or as a combination of storage and memory.
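Those direct-access configurations rested on what Intel called App Direct mode, in which the modules appear to software as byte-addressable persistent memory, typically through a DAX-capable filesystem. As a rough illustration of that programming model – a minimal sketch using the open source Persistent Memory Development Kit's libpmem, not Intel's only supported path, and assuming a DAX mount at /mnt/pmem, which is a made-up path – an application could map a file and flush writes from the CPU caches straight to the media:

    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Map (creating if absent) a 4KB file on a DAX-mounted filesystem.
         * The /mnt/pmem path is an assumption for illustration only. */
        char *buf = pmem_map_file("/mnt/pmem/example", 4096,
                                  PMEM_FILE_CREATE, 0600,
                                  &mapped_len, &is_pmem);
        if (buf == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        strcpy(buf, "hello, persistent memory");

        /* Flush the write out of the CPU caches so it survives power loss;
         * fall back to msync() if the mapping isn't real persistent memory. */
        if (is_pmem)
            pmem_persist(buf, mapped_len);
        else
            pmem_msync(buf, mapped_len);

        pmem_unmap(buf, mapped_len);
        return 0;
    }

The point being that stores land on durable media without a block-device round trip – something ordinary NAND SSDs, however fast, can't offer.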

What now?

While DDR5 promises to address some of the capacity challenges that Optane persistent memory solved, with DIMM capacities of 512GB planned, it’s unlikely to be price competitive.

DDR isn't getting cheaper – at least not quickly – but NAND flash prices are plummeting as supply outpaces demand. All the while, SSDs are getting faster in a hurry.

Micron this week began volume production of 232-layer NAND that will push consumer SSDs into 10+ GB/sec territory. That's still not fast or low-latency enough to replace Optane for large in-memory workloads, analysts tell The Register, but it's getting awfully close to the 17GB/sec offered by a single channel of low-end DDR4.
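For context, that DDR4 figure is simple arithmetic. Assuming bottom-of-the-range DDR4-2133 on a standard 64-bit (8-byte) channel, peak theoretical bandwidth works out to roughly:

    2,133 MT/s × 8 bytes per transfer ≈ 17GB/sec

Bandwidth is only half the story, of course; latency is where NAND still trails both DRAM and Optane by a wide margin.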

So if NAND isn't the answer, then what? Well, there's actually an alternative to Optane memory on the horizon. It's called Compute Express Link (CXL), and Intel is already heavily invested in the technology. Introduced in 2019, CXL defines a cache-coherent interface for connecting CPUs, memory, accelerators, and other peripherals.

CXL 1.1, which will ship alongside Intel's long-delayed Sapphire Rapids Xeon Scalable processors and AMD's fourth-gen Epyc Genoa and Bergamo chips later this year, enables memory to be attached directly to the CPU over a PCIe 5.0 link.

Vendors including Samsung and Marvell are already planning memory expansion modules that slot into PCIe slots much like a GPU and provide a large pool of additional capacity for memory-intensive workloads.

Marvell’s Tanzanite acquisition this spring will allow the vendor to offer Optane-like tiered memory functionality as well.

What's more, because the memory is managed by a CXL controller on the expansion card, older and cheaper DDR4 or even DDR3 modules could be used alongside modern DDR5 DIMMs. In this regard, CXL-based memory tiering could prove superior, as it doesn't rely on a specialized memory technology like 3D XPoint.
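Exactly how operating systems will present that expansion memory is still being worked out, but the expectation – and the direction of early Linux support – is that a CXL memory expander shows up as a CPU-less NUMA node, so existing NUMA-aware tools and APIs work unmodified. Purely as an illustrative sketch, with the node number an assumption that will vary by platform, an application could steer an allocation onto such a node using the standard libnuma interface:

    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this system\n");
            return 1;
        }

        /* Assumption for illustration only: the CXL memory expander is
         * exposed as NUMA node 1. Real node numbering is platform-specific. */
        int cxl_node = 1;
        size_t size = 64UL << 20;   /* 64MB */

        /* Ask the kernel to back this allocation with memory on the CXL
         * node rather than node-local DRAM. */
        void *buf = numa_alloc_onnode(size, cxl_node);
        if (buf == NULL) {
            fprintf(stderr, "allocation on node %d failed\n", cxl_node);
            return 1;
        }

        /* Touch the pages so they are actually placed, then treat buf like
         * any other cache-coherent memory. */
        memset(buf, 0, size);

        numa_free(buf, size);
        return 0;
    }

Compile with -lnuma. The contrast with Optane is the point: CXL memory rides on interfaces the software stack already understands, rather than a provisioning scheme unique to one vendor's DIMMs.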

VMware is pondering software-defined memory that would share memory from one server with other boxes – an effort that will be far more potent if it's built on a standard like CXL.

However, emulating some aspects of Intel's Optane persistent memory may have to wait until the first CXL 2.0-compatible CPUs – which will add support for memory pooling and switching – come to market. It also remains to be seen how software interacts with CXL memory modules in tiered memory applications. ®
