d-Matrix gets $44m in quest for efficient AI server chiplets
d-ream team or a d-isaster? Let's see what Microsoft's money can do
Another day, another US chip startup raising tens of millions of dollars.
This time, it's one you likely haven't heard of yet: d-Matrix, which has raised $44 million from Microsoft, SK Hynix, Marvell Technology, and others for a new kind of efficient AI chiplet design for datacenter servers.
The Santa Clara-based startup disclosed its funding and initial roadmap last week, saying it will use an "innovative digital in-memory computing" architecture to build chiplet-based processors that, it claims, will deliver faster AI inference than CPUs and GPUs for large transformer models.
As such, d-Matrix is seeking to win business from so-called hyperscalers like Meta and Alphabet, which rely on transformer models with millions or billions of parameters to power popular applications, making the startup the latest recipient of the billions flowing into chip funding.
d-Matrix is looking at use cases in cloud computing like recommendation, text classification, social media analysis, search, and content moderation. The startup is also hoping to pack edge datacenters with its chip, which it said provides a major advantage in compute efficiency thanks to its digital in-memory computing architecture.
For d-Matrix, the edge opportunity spans both regular enterprises, with targeted use cases like chatbots and document processing, and 5G networks, with use cases like voice-enabled search and "metaverse AI."
"The hyperscale and edge datacenter markets are approaching performance and power limits, and it's clear that a breakthrough in AI compute efficiency is needed to match the exponentially growing market," said Sasha Ostojic, venture partner at venture capital firm Playground Global, which led the funding round alongside fellow firms Nautilus Venture Partners and Entrada Ventures.
Ostojic said the startup's other differentiation comes from its software, which d-Matrix markets on its website as "open, simplistic, scalable and frictionless for ease of adoption." Its software capabilities include the ability for users to "seamlessly map existing trained models" to its hardware.
"d-Matrix is a novel, defensible technology that can outperform traditional CPUs and GPUs, unlocking and maximizing power efficiency and utilization through their software stack," Ostojic added.
The startup's initial roadmap consists of its first silicon chiplet, code-named Nighthawk, and a follow-up called Jayhawk, which it said will be released "soon." The investor funding will help d-Matrix build out this roadmap and hire more people for its current team of 50.
By using a chiplet design, d-Matrix said it can integrate multiple programming engines together in a Lego-like fashion in a single package. The startup is fitting these chiplets together using an advanced packaging technology it calls the Hetero-Modular organic package, which it said "enables chiplet heterogeneity and scalability, while being readily available and cost-effective."
"d-Matrix has been on a three-year journey to build the world's most efficient computing platform for AI inference at scale," said Sid Sheth, co-founder and CEO of d-Matrix. "We've developed a path-breaking compute architecture that is all-digital, making it practical to implement while advancing AI compute efficiency far past the memory wall it has hit today."
Both Sheth and his fellow co-founder, Sudeep Bhoja, have a respectable track record in the semiconductor industry. The two veteran engineers were previously executives at high-speed interconnect maker Inphi Corporation, which in 2020 was acquired in a $10 billion deal by Marvell Technology, one of the initial investors for d-Matrix.
Prior to that, Sheth was director of marketing for network connectivity at Broadcom, which he joined via the company's 2012 acquisition of NetLogic Microsystems. In the years before that, he was an engineer at Intel, working in the Pentium III processor group and a networking group.
Bhoja also worked at Broadcom, serving as a technical director of high-speed interconnects. Before that, he was chief architect at optical networking startup Big Bear Networks, which was acquired in 2005 by Finisar Corporation, now known as II-VI Inc. He also worked at Lucent Technologies and Texas Instruments.
d-Matrix is entering a crowded market dominated by Nvidia, so it has a lot to prove before making serious headway. Another inference chip startup, Esperanto Technologies, only recently started sampling its silicon with companies despite being founded in 2014. ®