Arm sees support path to heterogeneous compute

Toward making software development 'frictionless' over multiple processors, accelerators

Arm says heterogeneous compute architectures – those with a mix of CPUs, GPUs, DPUs, and other processor types – pose a challenge for software developers, and greater multi-architecture support is needed to address this.

Specialized processing, as the chip designer refers to it, will likely succeed Moore's Law as the driver of innovation. When combining CPUs, GPUs, DPUs, and other devices, system builders will focus on attributes such as performance, efficiency, and optimization for the task at hand rather than clock speed, Arm said.

But this specialized processing model upends business as usual for software developers, according to Bhumik Patel, Arm's Director for Software Ecosystem Development. In a blog post, he argues the answer is a frictionless experience that lets developers achieve multi-architecture support for the software they write.

Arm has a vested interest in this, of course, as many of the DPUs and other accelerators that appear in heterogeneous compute architectures tend to be based around Arm processor cores.

A good example is Nvidia's BlueField DPU, aimed at applications such as SmartNICs that can offload network processing tasks from the host CPU.

Patel said that frictionless development requires the availability of developer tools across the software stack, and for cloud and edge deployments, this must include the ability to develop applications with cloud-native practices. "Over the past few years, in collaboration with our partner ecosystem, we have enabled the majority of projects across the Cloud Native Computing Foundation landscape, and we continue to drive further adoption of multi-architecture support," he claimed.
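In practice, the multi-architecture support Patel describes means software that builds and runs identically on x86 and Arm hosts. A minimal sketch of what that looks like in a build script – the variable names and image tag here are illustrative, not taken from Arm's tooling:

```shell
#!/bin/sh
# Map the kernel's machine name to the arch labels used by
# cloud-native tooling (amd64/arm64).
arch="$(uname -m)"
case "$arch" in
  x86_64)        target=amd64 ;;
  aarch64|arm64) target=arm64 ;;
  *) echo "unsupported architecture: $arch" >&2; exit 1 ;;
esac
echo "building for linux/$target"

# A single image covering both platforms can then be produced with
# Docker's buildx, e.g.:
#   docker buildx build --platform linux/amd64,linux/arm64 -t myimage:latest .
```

Tooling such as Docker's buildx then assembles one multi-architecture image manifest, so the same tag pulls the right binary on an x86 server or an Arm-based edge box.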

Arm said it has also invested in efforts such as Project Cassini and Project Centauri to simplify the process of bringing cloud-native software experiences to edge deployments such as 5G base stations and IoT gateways, kit that is often based on Arm technologies.

Arm said both initiatives have three elements: a product certification process outlining hardware and firmware specifications; a security certification program; and reference implementation guides for software developers.

Hardware and security certifications ensure developers know what to expect from devices that meet Cassini or Centauri standards, while the reference designs reduce the time, cost, and effort required to develop on Arm.

VMware is one partner that has worked closely with Arm via Project Cassini to get ESXi-Arm up and running. This is the version of VMware's ESXi hypervisor designed to run on 64-bit Arm silicon, and it forms a key part of VMware's Project Monterey effort to enable Arm-based SmartNICs to handle network processing, zero-trust security, and storage acceleration features in VMware-based infrastructure.

Arm may not be the only firm with an interest in cross-platform development, but it is perhaps in a unique position thanks to the diverse range of areas that the Arm architecture is deployed in. And the firm may have yet another architecture to support when its upcoming GPU is introduced.

Patel said Arm is dedicated to accelerating deployments by enabling frictionless development. As long as such initiatives don't get axed as a part of Arm's efforts to make itself leaner and more appealing to investors for its upcoming IPO, of course. ®


Biting the hand that feeds IT © 1998–2022