Marvell CXL roadmap goes all-in on composable infrastructure

Chip biz bets interconnect tech will cement its cloud claim, one day


Hot on the heels of Marvell Technology's Tanzanite acquisition, executives speaking at a JP Morgan event this week offered a glimpse of the chipmaker's Compute Express Link (CXL) roadmap.

"This is the next growth factor, not only for Marvell storage, but Marvell as a whole," Dan Christman, EVP of Marvell's storage products group, said.

Introduced in early 2019, CXL is an open interface that piggybacks on PCIe to provide a common, cache-coherent means of connecting CPUs, memory, accelerators, and other peripherals. The technology is seen by many, including Marvell, as the holy grail of composable infrastructure because it allows memory to be disaggregated from the processor.

The rough product roadmap presented by Marvell outlined a sweeping range of CXL products including memory extension modules and pooling tech, switching, CXL accelerators, and copper and electro-optical CXL fabrics for rack-level and datacenter-scale systems.

Those aren't SSDs

With the first generation of CXL-compatible CPUs from Intel and AMD slated for release this year, one of the first products on Marvell's roadmap is a line of memory expansion modules. These modules supplement traditional DDR DIMMs and carry their own integrated CXL memory controller rather than relying on the CPU's onboard memory controller.

"DRAM is the largest component spend in the entire datacenter. It's more than NAND flash. It's more than CPUs," Thad Omura, VP of marketing for Marvell's flash business unit, said, adding that, traditionally, achieving the high-memory densities necessary for memory-intensive workloads has required high-end CPUs with multiple memory controllers onboard.

With CXL, now "you can plug in as many modules as you need," he said.
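
Marvell didn't get into the software view, but on current Linux kernels a CXL Type 3 memory expander generally shows up as a CPU-less NUMA node alongside the directly attached DIMMs. The sketch below, a Python script that assumes a Linux host with sysfs in its usual location, shows one way an operator might spot that extra capacity:

    # Sketch: list NUMA nodes and flag the CPU-less ones, which is how
    # CXL-attached expansion memory typically appears on a Linux host.
    # Assumes /sys is mounted in the standard location.
    from pathlib import Path

    def numa_nodes():
        for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
            cpulist = (node / "cpulist").read_text().strip()
            mem_kb = 0
            for line in (node / "meminfo").read_text().splitlines():
                if "MemTotal" in line:
                    mem_kb = int(line.split()[-2])  # "... MemTotal: <N> kB"
            yield node.name, cpulist, mem_kb

    for name, cpus, mem_kb in numa_nodes():
        kind = "likely expansion memory" if not cpus else "CPU-attached"
        print(f"{name}: cpus=[{cpus or 'none'}] mem={mem_kb // 1024} MiB ({kind})")

Memory onlined from such a node behaves like ordinary RAM, just with the added latency of a hop across the CXL link.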

Marvell plans to offer these CXL memory modules in a form factor similar to that used by NVMe SSDs today. In fact, because both the SSDs and CXL memory modules share a common PCIe electrical interface, they could be mixed and matched to achieve the desired ratio of memory and storage within a system.

Additionally, because the CXL controller functions as a standalone memory controller, systems builders and datacenter operators aren't limited to just DDR4 or DDR5 memory.

"Maybe you want to use DDR4 because it's a cheaper memory technology, but your server's CPU only supports the latest DDR5 controller," Omura said. "Now you can plug those DDR4 modules directly into the front" of the system.

The modules' onboard controllers also have performance implications, Omura claims: customers can hit the desired memory density without resorting to a two-DIMM-per-channel configuration, which typically forces the memory to run at a lower data rate.
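
Omura didn't put numbers on that claim, but the trade-off is easy to sketch: populating a second DIMM on a channel usually forces the platform to clock the memory down. The illustrative speed bins below are assumptions, not Marvell figures:

    # Rough per-channel comparison of one versus two DIMMs per channel (DPC).
    # The speed bins are illustrative; real downclocking is platform-specific.
    BYTES_PER_TRANSFER = 8  # one 64-bit DDR5 channel

    def channel(label, transfers_mtps, dimms, dimm_gb):
        bw_gbs = transfers_mtps * BYTES_PER_TRANSFER / 1000  # GB/s per channel
        print(f"{label}: {dimms * dimm_gb} GB per channel at ~{bw_gbs:.1f} GB/s")

    channel("1DPC DDR5-4800", 4800, dimms=1, dimm_gb=64)
    channel("2DPC DDR5-4000", 4000, dimms=2, dimm_gb=64)
    # A CXL expansion module adds capacity over PCIe/CXL lanes instead,
    # so the DDR channels can stay at one DIMM each and keep full speed.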

While Marvell didn't commit to a specific timeline for bringing its first generation of CXL products to market, it did say it was aligning with the major server platform launches, including Intel's Sapphire Rapids and AMD's Genoa Epyc processor families later this year.

"We're really just at the beginning stages of CXL solutions going to market. Server platforms that support CXL are just starting to emerge, and the CXL solutions that follow will need to prove the value proposition and also be qualified in the systems," Omura said.

A true composable future remains years off

In fact, many of the products on Marvell's CXL roadmap are dependent on the availability of compatible microprocessors.

While the CXL 2.0 spec required for many of the technology's more advanced use cases — including composable infrastructure — has been around for more than a year, compatible CPUs from Intel and AMD aren't expected to launch until 2023 at the earliest.

These technologies include memory pooling and switching, which will enable datacenter operators to consolidate large quantities of memory into a single, centralized appliance that can be accessed by multiple servers simultaneously. "This is a tremendous value for hyperscalers looking to really optimize DRAM utilization," Omura argued.
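
That value is straightforward to model: when every server has to be provisioned for its own worst case, a lot of DRAM sits stranded, while a shared pool only needs to cover the fleet's combined demand plus some headroom. The toy calculation below uses invented numbers purely for illustration:

    # Toy model of stranded DRAM: per-server worst-case provisioning versus a
    # shared CXL-attached pool. All numbers are invented for illustration.
    import random

    random.seed(1)
    servers = 100
    per_server_peak_gb = 512  # each box sized for its own peak demand
    typical_demand_gb = [random.randint(96, 512) for _ in range(servers)]

    provisioned = servers * per_server_peak_gb
    pool_size = int(sum(typical_demand_gb) * 1.2)  # combined demand + 20% headroom

    saved = provisioned - pool_size
    print(f"per-server provisioning: {provisioned} GB")
    print(f"pooled provisioning:     {pool_size} GB")
    print(f"DRAM saved:              {saved} GB ({100 * saved / provisioned:.0f}%)")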

At this stage, Marvell believes chipmakers may begin offering CPUs that forgo onboard memory controllers entirely and instead interface directly with a CXL switch for memory, storage, and connectivity to accelerators like DPUs and GPUs.

"The resources will be able to scale completely independently," Omura said.

With CXL 2.0, Marvell also plans to integrate its portfolio of general compute and domain-specific engines directly into the CXL controller.

For example, these CXL accelerators could be used to operate on data directly on a memory expansion module to accelerate analytics, machine learning, and complex search workloads, Omura said.
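
Marvell didn't describe a programming model for these engines, but the appeal is largely about data movement: a filter or scan that runs on the module itself only has to ship its much smaller result back over the link. A back-of-the-envelope comparison, using hypothetical workload parameters:

    # Back-of-the-envelope data-movement comparison for a scan pushed down to
    # an accelerator on a CXL memory module. Workload figures are hypothetical.
    table_gb = 256        # data resident on the expansion module
    selectivity = 0.02    # fraction of rows the filter keeps

    host_side = table_gb                  # pull the whole table across the link
    pushed_down = table_gb * selectivity  # ship back only the matching rows

    print(f"host-side scan moves   {host_side:.1f} GB over the link")
    print(f"pushed-down scan moves {pushed_down:.1f} GB over the link")
    print(f"link traffic cut by roughly {host_side / pushed_down:.0f}x")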

Beyond the rack

For now, much of the chipmaker's CXL roadmap is limited to node- and rack-level communications. But with the introduction of the CXL 3.0 spec later this year, Marvell expects this to change.

Last year, the Gen-Z Consortium agreed to fold its coherent-memory fabric assets into the CXL Consortium. This kind of fabric connectivity will be key to extending the technology beyond the rack level to the rest of the datacenter.

"The rack architecture of the future will fully utilize CXL as a low-latency fabric," Omura said. "You'll have completely disaggregated resources that you can instantly compose to meet your workload needs at the click of a button."

To achieve this goal, Marvell plans to use its investments in copper serializer/deserializer (SerDes) technology, plus the electro-optical interface expertise gained with its 2021 acquisition of Inphi, to extend CXL fabrics across longer distances.

"We're in a great position to leverage our electro-optics leadership technology to ensure CXL has the best possible distance, latency, cost, and performance over fiber connectivity," he said. "We absolutely believe this represents a multi-billion dollar opportunity."

Eventually, Marvell says, all compute, storage, and memory resources will be disaggregated and composed on the fly across multiple racks over a CXL fabric. ®

