Take the wheel, Arm tells its notebook-grade Cortex-A76 CPU: Now you're a robo-ride brain

Safety-critical feature plugged into high-end processor design


British chip designer Arm really doesn't want to be overtaken in the world of autonomous cars by the likes of Intel, Nvidia, and other rivals.

The SoftBank-owned semiconductor architects have thus injected a safety feature normally reserved for real-time CPUs into their highest-end application processor core, in a bid to lure system-on-chip designers and automakers to use the technology to literally steer future self-driving cars.

Specifically, Arm will today announce it has added its Split-Lock feature, found in its Cortex-R 32-bit cores used in real-time and safety-critical systems, to the 64-bit Cortex-A76. The result is the Cortex-A76AE. The AE stands for "automotive enhanced," indicating it's aimed at running code controlling self-driving road vehicles.

Split-Lock comes in two modes: split and lock. In split mode, the cores in a processor cluster run independently.

In lock mode, two cores pair up and run in lockstep: they fetch, decode, execute, and retire exactly the same instructions from memory at exactly the same time. Since they are identical and running the same code simultaneously, they should behave identically at any given moment. If they deviate in operation, an alarm is raised inside the chip to signal something has gone wrong.

The idea is that if a random hardware error occurs – caused by cosmic radiation or one of life's unhappy coincidences flipping a transistor gate or on-die memory cell – then the affected CPU core will fall out of step with its twin, alerting the system-on-a-chip electronics. It's just assumed both cores won't suffer the same error at the same time.
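
To make that compare-and-alarm idea concrete, here is a minimal, purely illustrative sketch in C: two "cores" are modeled as producing a stream of retired-instruction results, a single bit flip is injected into one of them, and the mismatch trips the alarm. The structures and function names here are hypothetical; in the real silicon this comparison is done by hardware logic, cycle by cycle, not by software.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy model of one retired-instruction record per core (hypothetical). */
typedef struct {
    uint64_t pc;      /* program counter of the retired instruction */
    uint64_t result;  /* value it produced */
} retire_record_t;

/* The check the hardware comparator performs: both cores must match. */
static int lockstep_match(const retire_record_t *a, const retire_record_t *b)
{
    return a->pc == b->pc && a->result == b->result;
}

static void raise_lockstep_alarm(uint64_t step)
{
    /* In silicon this would be a fault signal to the SoC; here we just log. */
    fprintf(stderr, "lockstep divergence detected at step %llu\n",
            (unsigned long long)step);
}

int main(void)
{
    retire_record_t core_a = { .pc = 0x1000, .result = 42 };
    retire_record_t core_b = core_a;               /* twins start identical */

    for (uint64_t step = 0; step < 5; step++) {
        if (step == 3)
            core_b.result ^= 1;                    /* inject a single bit flip */

        if (!lockstep_match(&core_a, &core_b)) {
            raise_lockstep_alarm(step);
            return EXIT_FAILURE;                   /* hand off to recovery */
        }
        /* Both cores advance identically while they agree. */
        core_a.pc += 4;      core_b.pc += 4;
        core_a.result += 1;  core_b.result += 1;
    }
    return EXIT_SUCCESS;
}
```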

If the alarm is raised, the cores can be interrupted, recovered to a good state, and allowed to continue, preventing the random error from causing the system to perhaps spiral into a crash – which would be bad news when powering a computer-controlled car. Keeping this protection mechanism in the system-on-chip avoids having to use an external watchdog just for this job.
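
A recovery path built on that alarm might look something like the following sketch, again in C and with entirely hypothetical names: the safety firmware periodically saves a known-good checkpoint while the pair agrees, and the alarm handler discards the diverged state, restores the checkpoint, and lets execution resume. The real mechanism is up to the SoC and firmware designers; this only shows the save-restore-resume shape described above.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical snapshot of the architectural state the firmware preserves. */
typedef struct {
    uint64_t regs[31];   /* general-purpose registers */
    uint64_t pc;         /* program counter */
    uint64_t sp;         /* stack pointer */
} checkpoint_t;

static checkpoint_t last_good_state;

/* Called at safe points while the lockstep pair is still in agreement. */
void checkpoint_save(const checkpoint_t *current)
{
    memcpy(&last_good_state, current, sizeof last_good_state);
}

/* Hypothetical handler invoked when the lockstep comparator fires. */
bool lockstep_alarm_handler(checkpoint_t *core_state, unsigned alarm_count)
{
    /* The diverged state is untrustworthy: discard it and reload the last
     * checkpoint taken while both cores agreed. */
    memcpy(core_state, &last_good_state, sizeof *core_state);

    /* A transient fault (a flipped bit) won't recur, so resuming from the
     * restored state should bring the pair back into agreement. Repeated
     * alarms suggest a permanent fault, so escalate to a fail-safe mode. */
    return alarm_count < 3;   /* true: resume; false: fail safe */
}
```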

This lockstep approach is common in safety-critical microcontroller-grade processors, where two or more cores keep each other in check to make sure random errors do not cause software to make bad decisions that lead to serious harm to whoever is in or near the machinery or vehicle under the hardware's control. You don't want a flipped bit to cause a car, a fraction of a second later, to brake suddenly and get rear-ended.

What Arm's done here is plug that lockstep safety feature into its Cortex-A76, a CPU core normally destined for top-end smartphones and lightweight, battery-friendly touchscreen notebooks. If you design a processor using licensed Cortex-A76AE cores, you can pair them up and run them in lockstep to ensure they are operating as expected, so that any crucial decisions they make aren't poisoned by random hardware glitches.

Why? Because it wants system-on-chip designers to pick its safety-enhanced Cortex-A cores for power-efficient, performant-enough processors that vehicle manufacturers will use in the brains of self-driving cars and trucks.

That means automakers not picking rival components from Intel, Nvidia, and others developing chipsets for computer-driven jalopies.

Other features

A 7nm 16-core Cortex-A76AE cluster is said to draw less than 15W. Two further AE-class CPU cores are planned: Helios-AE and Hercules-AE. The A76AE, according to Arm, meets the ISO 26262 ASIL B and ASIL D safety standards, can sport up to 64 cores per chip, provides Armv8.2's RAS (reliability, availability, and serviceability) features [PDF], supports virtualization, and can use memory protection to wall off machine-learning acceleration hardware.

That means, according to the documentation, if the system-on-chip includes AI math acceleration components – which are rather useful for autonomous driving – the cores can handle this tech, and set up safeguards to stop neural-net code from affecting safety-critical firmware.

Essentially, you could have four cores in a cluster running in split mode, hosting a hypervisor, operating systems, general applications, and ASIL B-grade code, while another four cores run in lockstep mode with a real-time operating system and ASIL D-grade safety-critical vehicle control software on top.
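
One way to picture that partitioning is the configuration-table sketch below. It is purely illustrative: the structures and names are hypothetical, not an Arm or vendor API, and how cores actually get assigned to split or lock mode is a matter for the SoC and firmware designers.

```c
#include <stddef.h>
#include <stdio.h>

typedef enum { MODE_SPLIT, MODE_LOCK } core_mode_t;

/* Hypothetical description of how an eight-core A76AE cluster could be carved up. */
typedef struct {
    const char *name;      /* logical partition */
    core_mode_t mode;      /* split: independent cores; lock: lockstep pairs */
    int         cores;     /* physical cores assigned */
    const char *workload;  /* what runs there */
    const char *asil;      /* targeted ISO 26262 safety level */
} partition_t;

static const partition_t cluster_plan[] = {
    { "general-compute", MODE_SPLIT, 4,
      "hypervisor, guest OSes, applications", "ASIL B" },
    { "vehicle-control", MODE_LOCK,  4,   /* i.e. two lockstep pairs */
      "real-time OS, safety-critical control code", "ASIL D" },
};

int main(void)
{
    for (size_t i = 0; i < sizeof cluster_plan / sizeof cluster_plan[0]; i++) {
        const partition_t *p = &cluster_plan[i];
        printf("%-16s %-5s %d cores  %-45s %s\n",
               p->name, p->mode == MODE_SPLIT ? "split" : "lock",
               p->cores, p->workload, p->asil);
    }
    return 0;
}
```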

Quoting from a May 2018 UBS study [PDF], Arm executives reckon we won't see truly driverless autonomous rides arriving on our streets before 2027, though. Still, no harm in getting the ball rolling now. The first A76AE cores are "expected" to appear in vehicles from 2020, we're told – presumably in some kind of super-cruise-control system. ®
