Will optics ever replace copper interconnects? We asked this silicon photonics startup

Star Trek's glowing circuit boards may not be so crazy


Science fiction is littered with fantastic visions of computing. One of the more pervasive is the idea that one day computers will run on light. After all, what’s faster than the speed of light?

But it turns out Star Trek’s glowing circuit boards might be closer to reality than you think, Ayar Labs CTO Mark Wade tells The Register. While fiber-optic communications have been around for half a century, we’ve only recently started applying the technology at the board level. Even so, Wade expects optical waveguides to begin supplanting the copper traces on PCBs within the next decade, as shipments of optical I/O products take off.

Driving this transition are a number of factors and emerging technologies that demand ever-higher bandwidth over longer distances without sacrificing latency or power.

If this sounds familiar, these are the same challenges that drove telecommunication giants like Bell to replace thousands of miles of copper telephone cables with fiber optics in the 1970s.

As a general rule, the higher the bandwidth, the shorter the distance a signal can travel before it needs amplifiers or repeaters, which extend reach at the expense of latency. And this is hardly unique to telecommunications networks.

The same laws of physics apply to interconnect technologies like PCIe. As it doubles its effective bandwidth with each subsequent generation, the physical distance the signal can travel shrinks.
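To make that trade-off concrete, here is a back-of-the-envelope sketch. The per-lane signaling rates are the published PCIe figures; the channel-loss budget and per-inch trace losses are rough assumptions chosen purely for illustration, not spec values:

```python
# Back-of-the-envelope sketch. Per-lane signaling rates are the published PCIe
# figures; the loss budget and per-inch trace losses are rough assumptions
# chosen only to illustrate why reach shrinks as bandwidth doubles.
RATES_GT_S = {3: 8, 4: 16, 5: 32, 6: 64}              # raw rate per lane, by generation
LOSS_BUDGET_DB = 36.0                                 # assumed end-to-end channel loss budget
LOSS_DB_PER_INCH = {3: 0.5, 4: 1.0, 5: 2.0, 6: 2.0}   # assumed FR4-class loss at Nyquist
                                                      # (PCIe 6.0's PAM4 keeps Nyquist at 16 GHz)

for gen, rate in RATES_GT_S.items():
    x16_gb_s = rate * 16 / 8          # one direction, x16 link, ignoring encoding overhead
    reach_in = LOSS_BUDGET_DB / LOSS_DB_PER_INCH[gen]
    print(f"PCIe {gen}.0: {rate} GT/s per lane, ~{x16_gb_s:.0f} GB/s per x16 direction, "
          f"~{reach_in:.0f} inches of trace before a retimer")
```

The pattern, not the exact numbers, is the point: each doubling of the signaling rate eats into the loss budget faster, so the distance a signal can cover before a retimer has to step in keeps shrinking.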

“In a lot of cases, long distances are now defined as anything more than a few meters,” Wade said. “As the PCIe bandwidths are going higher and higher, you can no longer escape the server board without putting a retimer on the board” to boost the signal.

“Even if you can get the bandwidth from point A to point B, the question is with how much power and with how much latency,” he added.

Ayar Labs takes optical I/O to the centimeter scale

This is exactly the problem that Ayar Labs is trying to solve. The silicon photonics startup has developed a chiplet that takes electrical signals from chips and converts them into a high-bandwidth optical signal.

And because the technology uses a chiplet architecture, it’s intended to be packaged alongside compute tiles from other chipmakers using open standards like the Universal Chiplet Interconnect Express (UCIe), which is still under active development.

The underlying technology has helped the company raise nearly $200 million from tech giants like Intel and Nvidia, and secure several high-profile partnerships, including one to bring optical I/O capabilities to Hewlett Packard Enterprise’s high-performance Slingshot interconnect fabric.

Near-term applications

While Wade firmly believes that optical communication at the system level is inevitable, he notes there are several applications for optical interconnects in the near term. These include high-performance computing and composable infrastructure.

“Our claim is that the electrical I/O problem is going to become so severe that computing applications are going to start to get throttled by their ability to shift bandwidth around,” he said. “For us, that's AI and machine learning scale out.”

These HPC environments often require specialized interconnect technologies to avoid bottlenecks. Nvidia’s NVLink is one example: it provides high-speed, direct GPU-to-GPU communication, with NVSwitch extending it across larger groups of GPUs.

Another area of opportunity for optical I/O, Wade says, is the kind of rack-level composable infrastructure promised by Compute Express Link’s (CXL) latest specs.

CXL defines a common, cache-coherent interface built on PCIe for interconnecting CPUs, memory, accelerators, and other peripherals.

The CXL 1.0 and CXL 2.0 specs promise to unlock a variety of memory pooling and tiered memory functionality. However, the open standard’s third iteration, expected to be ratified later this year, will extend these capabilities beyond the rack level.
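To give a sense of how that pooled or tiered memory surfaces to software: on Linux, CXL-attached memory typically appears as a CPU-less NUMA node. The sketch below is an illustration under that assumption (it is not Ayar Labs or CXL consortium code); the sysfs paths are standard, but the node layout is machine-specific.

```python
# Minimal sketch: walk Linux's sysfs NUMA topology and flag CPU-less nodes,
# which is how CXL-attached or pooled memory typically shows up to software.
import os
import re

NODE_ROOT = "/sys/devices/system/node"  # standard sysfs location on Linux

for entry in sorted(os.listdir(NODE_ROOT)):
    if not re.fullmatch(r"node\d+", entry):
        continue
    with open(os.path.join(NODE_ROOT, entry, "cpulist")) as f:
        cpus = f.read().strip()
    # A memory-only node has an empty cpulist; on a CXL system that is
    # usually the memory expander or pooled-memory device.
    label = f"CPUs {cpus}" if cpus else "no CPUs (candidate CXL/pooled memory)"
    print(f"{entry}: {label}")
```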

It’s at this stage of CXL’s development that Wade says optical’s advantages will be on full display.

“Even at the CXL 2.0 level, you're very limited to the degree in which you can scale out, because the moment you hit something like a retimer, you start to incur latencies” that make memory pooling impractical, he said.
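A toy model makes the arithmetic plain. All the figures below are assumptions picked for illustration (they come from neither Ayar Labs nor the CXL spec), but they show how fixed per-hop penalties stack up against a local DRAM access:

```python
# Toy latency model with assumed figures (not from Ayar Labs or the CXL spec):
# each retimer or switch hop on the path adds a fixed delay, so remote pooled
# memory quickly loses ground against local DRAM.
LOCAL_DRAM_NS = 100   # assumption: typical local DRAM access
CXL_PORT_NS = 25      # assumption: added per CXL port traversal (one each end)
RETIMER_NS = 10       # assumption: added per retimer
SWITCH_NS = 70        # assumption: added per CXL switch hop

def remote_access_ns(retimers: int, switch_hops: int) -> float:
    """Estimated latency of one remote pooled-memory access."""
    return LOCAL_DRAM_NS + 2 * CXL_PORT_NS + retimers * RETIMER_NS + switch_hops * SWITCH_NS

for r, s in [(0, 0), (2, 1), (4, 2)]:
    ns = remote_access_ns(r, s)
    print(f"{r} retimers, {s} switch hops: ~{ns:.0f} ns ({ns / LOCAL_DRAM_NS:.1f}x local DRAM)")
```

Once a remote access costs several times a local one, software has to treat pooled memory as a slower tier rather than a transparent extension of local DRAM.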

However, for at least the first generation of CXL products, Wade expects most, if not all, will be electrical. “There's a lot of software stack work that has to get done to really enable these kind of disaggregated systems” before CXL will be ready for optical I/O, he said.

But as the applications for optical I/O become more prevalent, Wade predicts the supply chain economics will make the technology even more attractive from a cost perspective. “It's our belief that we're gonna see an optical I/O transformation start to hit throughout almost every market vertical that's building computing systems.”

Challenges aplenty

Of course, getting there won’t be without its challenges, and one of the biggest is convincing customers the technology is not only more performant and economically viable but also mature enough for production environments.

This is specifically why Ayar Labs is focused on optical interconnects as opposed to co-packaged optics. One of the reasons co-packaged optics haven’t taken off is that their blast radius in the event of a failure is significantly larger: if the optics fail on a co-packaged optical switch, the entire appliance goes down. And many of the same reliability concerns apply to optical I/O.

“Whenever you have a heavily commoditized, standardized, risk-averse application space, that is not a place to try to deploy a new technology,” Wade said. However, “if you have a high-value application that highly benefits from increases in hardware performance, then you're obviously going to take more risk.”

By focusing its attention on HPC environments, Ayar believes it can refine its designs and establish a supply chain for components, all while racking up the substantial field-operating hours necessary to sell to more mainstream markets.

Sci-Fi optical computers still more than a decade away

For customers that are ready and willing to risk deploying nascent technologies, optical I/O is already here.

“The customer that we're delivering to right now has already replaced their board-level links with our optical I/O,” Wade said. “Every socket-to-socket link is an optical I/O link, and that's even at the board level.”

As the technology matures, the question then becomes whether optical waveguides will ever be integrated into the PCB itself — à la Star Trek.

“Will we see the optical waveguides getting integrated into the boards? I do think we’ll see some of that emerge actually within the next decade,” he said. “As the volume of optical I/O solutions start to get massive, it’ll make it more attractive for some of these solutions.”

Once you start shrinking beyond the board level, the future of optical I/O gets a bit murkier. The next logical step, Wade says, would be using optics to connect the individual dies that make up the chip.

However, he doesn’t expect this to happen anytime soon. “As you go into the millimeter scale, electrical I/O has, I think, a healthy roadmap in front of it,” he said. “Beyond 10-15 years, we might see… optical communication start to enter the millimeter scale regime.” ®
