CXL absorbs OpenCAPI on the road to interconnect dominance

It's not the first rival the open standard has assimilated – remember Gen-Z?

Compute Express Link (CXL) looks set to become the standard high-performance interconnect for linking CPUs to devices and distributed memory, now that it is to absorb the rival OpenCAPI specification effort.

CXL was initially developed by Intel, and following the formation of the CXL Consortium in 2019 to promote the standard, it garnered support from a broad range of industry players.

OpenCAPI, meanwhile, is based on technology IBM developed to connect accelerators to its Power processors. The OpenCAPI Consortium (OCC) was formed in 2016 to spread the standard beyond Power chips, into AMD's Epyc server portfolio, for example.

Now, the OpenCAPI Consortium has announced that it has entered into an agreement with the CXL Consortium that – if approved by all parties – will see all OpenCAPI Consortium assets, including the OpenCAPI and OMI specifications, transferred to the CXL Consortium.

OMI, or Open Memory Interface, is an extension of OpenCAPI that supports low-latency memory operations.

This isn't the first rival that CXL has assimilated, either – earlier this year it signed an agreement with the Gen-Z Consortium to take over the specifications and assets of the Gen-Z interconnect, which was designed to connect storage-class memory technologies to CPUs.

With that agreement, Gen-Z effectively ceased to exist and its technology was folded into CXL, a move that is likely to be repeated with the absorption of OpenCAPI. This is not necessarily a bad thing, as it means the industry can now standardize on a single interconnect rather than having competing technologies.

CXL Consortium president Siamak Tavallaei said it was an opportunity to focus the industry on specifications under one organization.

"Assignment of OCC assets will allow for the CXL Consortium to freely utilize what OCC has already developed with OpenCAPI/OMI," Tavallaei said.

CXL, which uses PCIe 5.0 as its physical and electrical interface, is set to open up new architectural choices in the datacenter: CXL 2.0 makes it possible for servers to connect to resources such as accelerators or memory sitting inside other nodes.

The first CXL-compatible systems are expected to launch later this year with the availability of Intel's Sapphire Rapids Xeon Scalable processors and AMD's Genoa fourth-generation Epyc chips.

In anticipation of these, memory and storage maker SK hynix this week unveiled its first DDR5 DRAM-based CXL memory modules, stating that it aims to mass-produce CXL memory products by 2023.

Instead of shipping as a DIMM, the module comes in an EDSFF (Enterprise & Data Center Standard Form Factor) enclosure of the kind more commonly used for drives. It is a 96GB product built from SK hynix's 24Gb DDR5 DRAM components, and connects to the outside world over a PCIe 5.0 x8 interface.

SK hynix declined to offer an availability date for its first CXL memory product, but Samsung announced its own 512GB CXL DRAM product back in May, which was set for evaluation and testing in Q3 of this year, with commercialization coming once the next-generation server platforms that support it are available. ®
