Nvidia, Apple noticeably absent from Intel-led chiplet interconnect collaboration

Party invitation lost – or snubbed?

Nvidia's absence from an Intel-led effort among industry players to develop next-gen chips with a more vibrant mix of cores is raising questions about the GPU maker's chiplet strategy, particularly regarding the integration of graphics when it comes to future PC processors.

Essentially, Intel and its friends this week launched a collaborative effort to foster and push a common-language interconnect between dies of CPU and GPU cores, AI engines, hardware accelerators, and other blocks.

This technology has been dubbed Universal Chiplet Interconnect Express, or UCIe, and yes, it has parallels with PCIe.

It's hoped this interconnect will allow these dies to seamlessly communicate with each other inside a single package. The idea being that you can – for instance – design your own custom die with some special acceleration on it, and then drop it into a package alongside compatible dies of someone else's CPU cores and acceleration units, and so on, and have the whole thing manufactured as a single processor. With a common die-to-die interconnect, the difficulty of engineering this is reduced.

These dies are also known as chiplets or compute tiles, and can be laid out flat (2D) or stacked (3D) as needed. Arranging a processor as a set of tiles in this way may bring benefits over integrating it all in one die, as a traditional system-on-chip, in terms of power, bandwidth, space, and more.
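The mix-and-match idea can be sketched as a toy software model (illustrative only: UCIe itself is an electrical and protocol specification, not a software API, and the vendor names and process nodes below are invented for the example):

```python
# Toy model of a multi-vendor chiplet package. Each die is sourced
# independently; agreeing on a common die-to-die link (the role UCIe
# plays in hardware) is what lets them be combined in one package.

from dataclasses import dataclass

@dataclass
class Chiplet:
    vendor: str
    function: str   # e.g. "x86 CPU cores", "AI accelerator"
    node: str       # process node the die is fabbed on

class Package:
    """A single processor assembled from independently sourced dies."""
    def __init__(self):
        self.dies = []

    def add(self, die: Chiplet):
        # In a real design each die would expose a standard UCIe port;
        # here we just collect them to show the mix-and-match idea.
        self.dies.append(die)

    def summary(self):
        return [f"{d.function} ({d.vendor}, {d.node})" for d in self.dies]

# One package, three dies, three different vendors and process nodes.
pkg = Package()
pkg.add(Chiplet("Vendor A", "x86 CPU cores", "Intel 4"))
pkg.add(Chiplet("Vendor B", "AI accelerator", "TSMC N5"))
pkg.add(Chiplet("Vendor C", "RISC-V controller", "TSMC N6"))
print(pkg.summary())
```

The point of the sketch: without a common interconnect, every pairing of dies would need a bespoke interface; with one, any compliant die can slot into the package.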

This approach suits Intel, for one, because it is keen to fab these kinds of chips for customers. AMD has, by the way, used chiplets for years in its Zen families of processors: it has TSMC manufacture AMD-designed dies and place them in single packages.

The UCIe working group counts a who's who of chip and hardware companies among its backers, including AMD, Arm, Google, Meta, Microsoft, Qualcomm, Samsung, and TSMC. Apple and Nvidia were the notable names missing.

Intel is spending billions of dollars to establish factories using 3D packaging to make chips. Intel is also aligning its CPU, GPU, and accelerator releases with its aggressive manufacturing roadmap, which calls for four new nodes by 2025.

To keep its factories busy, Intel is adopting a "multi-ISA" strategy, where it's opening up its assembly lines to make components powered by Arm and RISC-V core architectures. It is also licensing x86 cores, which can be packaged alongside dies of Arm and RISC-V compute tiles in a custom chip. UCIe will help those cores work in concert.

"The chiplet ecosystem created by UCIe is a critical step in the creation of unified standards for interoperable chiplets, which will ultimately allow for the next generation of technological innovations," Intel said in a statement.

Nvidia isn't backing the UCIe interconnect at its launch, which raises questions about how it'll develop GPU compute tiles to co-exist with x86 CPUs in future chips, especially for PCs.

Iffy about chiplets?

"Nvidia has a preference for really large monolithic die," Kevin Krewell, principal analyst at Tirias Research, told The Register. "I think it's because they have experience building big dies and it gives them a differentiation. Nvidia has had research projects with chiplets, but have not committed to a production strategy – to date."

The idea of GPU compute tiles was floated by Intel last month as a way to blur the lines between integrated and discrete GPUs. Its upcoming GPU, code-named Battlemage, will be integrated as a tile alongside other chiplets containing CPU cores and support circuitry in a chip code-named Meteor Lake.

"Nvidia likes to build larger GPUs that will not fit in a normal package with a x86 CPU. There's still an issue with what is the right size CPU and the right size GPU in one package. If anything, Nvidia will integrate an Arm CPU," Krewell said.

Intel, which recently released its first discrete GPU, is well positioned for this chiplet approach as it designs and fabs its processors. Nvidia didn't respond to requests for comment from The Register about not being involved in UCIe, and has previously declined to comment on its compute tile strategy.

Plowing the Bluefield

Nvidia's divergent approach to chip design is highlighted by its BlueField chip, which links up Arm CPU cores, its homegrown GPUs, and Mellanox networking tech in one package.

The biz failed to close a mega-deal to buy Arm, though CEO Jensen Huang addressed the company's three-chip strategy of CPUs, GPUs, and DPUs like Bluefield, on a recent earnings call.

Nvidia has a 20-year architectural license from Arm, which grants Nvidia "the full breadth and flexibility of options across technologies and markets" to deliver on its three-chip strategy, Huang said.

The GPU giant is on track to launch its Arm-based Grace processor, targeting giant AI and HPC workloads, in the first half of next year, Huang said, later adding that we should expect "a lot of CPU development around the Arm architecture."

At the same time, Huang said "whether x86 or Arm, we will use the best CPU for the job, and together with partners in the computer industry, offer the world's best computing platform to tackle the impactful challenges of our time."

Intel hopes a collaborative approach through efforts such as UCIe will dent Nvidia's strong position in the markets of graphics, supercomputing, and artificial intelligence. Intel has specifically argued that Nvidia's closed approach with platforms, such as Omniverse, won't be able to capture emerging opportunities like the metaverse.

"They aim to eat into the ecosystem. While their closed proprietary approach may have some short-term benefits, we don't believe a closed approach is scalable in the long run for this large an opportunity," said Raja Koduri, vice president and general manager of the Accelerated Computing Systems and Graphics Group at Intel, during the company's investor conference last month. ®

Updated to add

"We welcome industry-standard methods to connect accelerated computing technologies to CPUs," Nvidia spokesman Ken Brown told The Register.
