AMD’s Xilinx-enhanced Epycs are right up the alley of datacenter builders

Same goes for Arm, for the same reason: Power efficiency

Comment AMD’s plans to integrate AI functionality from its Xilinx FPGAs with its Epyc server microprocessors present several tantalizing opportunities for systems builders and datacenter operators alike, Glenn O’Donnell, research director at Forrester, told The Register.

A former semiconductor engineer, O’Donnell leads Forrester’s datacenter and networking studies. He sees several benefits to the kind of tight integration at the die or package level promised by AMD’s future CPUs.

“The more you can put on the same die or on the same package, the better,” he said.

Greener datacenters

One of the biggest benefits of integrating dedicated accelerators — like Xilinx’s AI engine — onto the CPU package is power consumption. It takes a lot of power to bring data on and off of the chip, O’Donnell said. “If you can do it on chip or on package, it’s going to be a lot more efficient.”
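O’Donnell’s point can be made with back-of-envelope arithmetic. The energy-per-bit figures below are hypothetical round numbers chosen for illustration only, not measurements of any AMD or Xilinx part; the point is the order-of-magnitude gap between moving data on-die, across a package, and off the chip entirely.

```python
# Back-of-envelope sketch of why on-package data movement saves power.
# The picojoule-per-bit figures are illustrative assumptions, not vendor data.
PJ_PER_BIT = {
    "on-die": 0.1,       # short on-chip wires
    "on-package": 1.0,   # die-to-die links within a package
    "off-chip": 10.0,    # board-level traces to a discrete accelerator
}

def transfer_energy_joules(gigabytes: float, path: str) -> float:
    """Energy to move `gigabytes` of data over the given path."""
    bits = gigabytes * 8e9
    return bits * PJ_PER_BIT[path] * 1e-12

# Moving 100 GB of activations over each path:
for path in PJ_PER_BIT:
    print(f"{path:11s}: {transfer_energy_joules(100, path):.2f} J")
```

Under these assumed figures, keeping the same traffic on the package rather than off the chip cuts the transfer energy by an order of magnitude, which is the effect O’Donnell is describing.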

And power consumption is a major concern for OEMs and datacenter operators, many of which have announced sweeping carbon neutrality goals in recent years.

So it’s no surprise that AMD CTO Mark Papermaster suggested domain-specific processors — like Xilinx FPGAs — will play a key role in meeting the company’s ambitious goal of delivering a 30-fold increase in power efficiency across its high-performance compute portfolio by 2025.

Greater power efficiency has other benefits that indirectly contribute to lower datacenter operating costs. The less efficient the chip, the greater the ratio of power that gets turned into waste heat, O’Donnell explained.

“Something that’s hard to cool means a lot of the electrical power you’re drawing is just going up in smoke. It’s generating heat instead of doing compute,” he said. “The more we can shift that towards compute and away from heat, that’s better for everybody, especially the planet.”

As much as 40 percent of datacenter power consumption today can be directly attributed to keeping the systems cool, Dell’Oro Group analyst Lucas Beran told The Register.

And as chipmakers push toward the greater power densities necessary to meet surging demand for AI/ML and data-intensive workloads, thermal design power continues to creep upward. Nvidia's latest GPUs, for example, are now available in configurations up to 700 watts.
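The two figures above can be tied together with a quick calculation. Assuming, purely for illustration, that cooling is the only non-IT load in the facility, a 40 percent cooling share implies a power usage effectiveness (PUE) well above what efficient operators report; and a hypothetical server carrying eight 700 W accelerators draws kilowatts from those parts alone.

```python
# Rough arithmetic behind the cooling and power-density figures above.
# Assumption (for illustration only): cooling is the sole non-IT load.
cooling_share = 0.40
it_share = 1.0 - cooling_share
pue = 1.0 / it_share  # PUE = total facility power / IT power
print(f"Implied PUE: {pue:.2f}")

# A hypothetical eight-accelerator server at a 700 W thermal design power:
accelerator_tdp_watts = 700
per_server_watts = 8 * accelerator_tdp_watts
print(f"Accelerator draw per server: {per_server_watts} W")
```

The implied PUE of roughly 1.67 means two-thirds of every watt entering the building reaches a processor; the rest, as O’Donnell puts it, goes up in smoke.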

“The appetite customers have for this kind of power is not dwindling by any stretch,” O’Donnell added.

The Edge and the Arm threat

While Epyc chips with integrated AI processing may soon find their way into the datacenter, O’Donnell finds it more likely the tech will take off at the edge, where power consumption and thermal characteristics are often the limiting factor.

“That’s where a lot of the demand is going to go. Datacenter is big, but I think edge is going to be much, much bigger — orders of magnitude bigger,” he said.

But for this technological gold rush to work, “we need the right kind of compute capability out at the edge, and it’s not necessarily the same kind of thing that’s going to be in the datacenter,” O’Donnell added.

Here, AMD and its contemporaries face an existential threat: highly efficient, Arm-based processors. “When you look at what’s going on, having an Arm-based architecture in some ways has a competitive advantage because at the edge, power consumption becomes even more important,” O’Donnell said. “I’ve had big conversations with the big chipmakers about the Arm threat and they’re taking it very seriously.”

“I don’t see AMD versus Intel or Nvidia as being the big killer battlefront. I see Arm being the big killer battlefront, precisely because it’s more energy efficient,” he added.

Is Pensando next?

Xilinx’s AI engines might be the first to get integrated into AMD’s processors, but they’re unlikely to be the last.

Last month, AMD announced it would acquire networking startup Pensando in a deal valued at $1.9 billion.

While the purchase better positioned AMD to compete with Intel and Nvidia in the smartNIC and data processing unit (DPU) space, Forrest Norrod, head of AMD’s Data Center Solutions Group, previously hinted the technology could be integrated into the chipmaker’s CPUs.

“It certainly makes a lot of sense,” O’Donnell said. “Why would you buy a company like that and not try to do something like that?”

Pensando’s DPU tech has the potential to vastly improve chip interconnects, enabling much denser compute platforms, he added.

“If you can put that interconnect down as close to the silicon as possible, you now have an interconnect for multiprocessor systems that blows the doors off anything that exists today,” O’Donnell said. “I really do believe we’re going to see a lot of the same software-defined networking concepts coming right down to the motherboard.”

“You could get some real screaming performance out of a system like that,” he said.

Intel, Nvidia aren't far behind

AMD is by no means the only vendor looking to integrate additional co-processors or accelerators onto the CPU package.

Intel’s Sapphire Rapids Xeon Scalables — assuming they don’t get delayed again — will see Intel pivot to a chiplet architecture in the first half of 2022. The transition is expected to help Intel achieve core densities more in line with rival AMD, but it’s not the only reason the company has embraced a tiled architecture.

In addition to CPU tiles, the company is exploring a number of accelerator tiles, including GPUs, that can be packaged alongside the CPU as dedicated chiplet dies.

Meanwhile, at GTC this spring, Nvidia announced it had integrated a ConnectX-7 smartNIC into its H100 line of GPUs in a bid to eliminate network bottlenecks in applications like multi-node AI training and 5G signal processing. ®
