Free for every Reg reader – and everyone else, too: Arm Cortex-M CPUs for Xilinx FPGAs

Like the blueprints we gave away last time... but... better


XDF If you've ever wanted to embed cheap-and-cheerful Arm Cortex CPU cores into your Xilinx FPGA designs, well, now's your chance.

The processor designer is making its 32-bit microcontroller-grade Cortex-M1 and M3 cores available for Xilinx's Spartan, Artix, and Zynq chips via its DesignStart program. We're told there are no royalty or license fees involved – the designs are available to download and use completely gratis for Xilinx components. The M1 is available from today, and the M3 by the end of the month.

DesignStart is geared toward rapidly thrusting Arm's lower-end CPU blueprints into the hands of system-on-chip designers who are on a budget or are particularly enamored with FPGAs. You could already get Cortex-M0 and M3 blueprints for no upfront fee via DesignStart for Arm's FPGA prototyping boards. These boards use gate arrays from Altera, which Intel bought in 2015.

So now Arm's extending its program to support FPGA chips from Altera-rival Xilinx. We're told by Arm's director of product management Phil Burr that the previous DesignStart FPGA files were aimed at helping engineers prototype on gate arrays before they design and fabricate custom system-on-chips – whereas these latest blueprints streamline the development of FPGA-powered hardware, all the way from research stages to volume production and deployment.

If you want to switch to manufacturing a custom chipset at some point, you can of course, though you'll have to cough up some royalties for those Cortex-M CPU cores. In effect, Arm hopes the Xilinx DesignStart freebies will encourage engineers to pick its Cortex tech for any future ASICs.

So, in short, there are the existing DesignStart Eval and DesignStart Pro packages, which are aimed at system-on-chip architects and are mostly free, and the new DesignStart FPGA program, which is specifically for Xilinx gate arrays and is completely free. And obviously nothing to do with open-source RISC-V cores appearing as FPGA implementations.

Impact

This all means hardware engineers can plonk a relatively simple CPU into their Xilinx FPGA designs, and thus add the ability to execute code on the gate array, increasing the chip's flexibility and capabilities. FPGAs are essentially chips with programmable electronic circuitry: engineers can reconfigure the internal logic as needed to perform whatever task is required – controlling I/O buses, performing image correction, crunching machine-learning math, and so on.

The Cortex-M1 and M3 cores can therefore be used for management tasks, such as shepherding data to and from a host system and communicating with that host processor, or running small real-time applications, allowing the FPGA to become, say, a programmable industrial controller or internet-of-things system-on-chip.

The cores have been tweaked to use Xilinx's AXI interconnect, so you should just be able to drag'n'drop them into your Xilinx FPGA design. They also take advantage of the FPGA's features: for example, the integer multiplier in the M1 uses one of the chip's built-in hardware multipliers rather than wasting general-purpose gates on the feature. The M1 is an FPGA-optimized version of the M0.
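In practice, "drag'n'drop" means instantiating the core in a Vivado block design like any other AXI IP. The sketch below shows the general shape of that flow in Vivado Tcl; the commands (`create_bd_design`, `create_bd_cell`, `connect_bd_intf_net`) are standard Vivado, but the IP identifier (VLNV) and pin names are placeholders – check the actual names in the DesignStart deliverables and IP catalog.

```tcl
# Hypothetical sketch: drop a Cortex-M1 into a Vivado block design.
# The VLNV and interface pin names below are placeholders.
create_bd_design "m1_system"
create_bd_cell -type ip -vlnv arm.com:CortexM:CORTEXM1_AXI:1.0 cortex_m1_0

# Wire the core's AXI master into the rest of the design, e.g.:
# connect_bd_intf_net [get_bd_intf_pins cortex_m1_0/CM1_AXI3] \
#                     [get_bd_intf_pins axi_interconnect_0/S00_AXI]
```

From there the usual block-design flow applies: assign addresses, validate the design, and generate a bitstream.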

According to Burr, one M1 CPU core takes up about 10 per cent of the gates in an Artix-7 gate array.

For what it's worth, Xilinx's higher-end FPGAs include Arm Cortex-A and R CPUs, albeit fixed in place and separate from the array of programmable gates. The DesignStart files allow designers to drop as many Cortex-M cores as required onto that array, complementing any beefier Cortex processor cores – it may be that you don't want a powerful Arm core, and a 32-bit microcontroller is more appropriate.

Xilinx marketing director Simon George said his customers wanted to be able to write Arm code and run it on their FPGAs with as little friction as possible, hence the decision to form a relationship with Arm and its DesignStart program. He also said Xilinx is still committed to its own MicroBlaze CPU architecture, quickly adding that the Cortex-M "is not a second-class citizen" on Xilinx's silicon.


The latest hotness is hardware accelerators – typically repurposed graphics cards, or customized chips, onto which applications on a host device can offload specialist work. We're talking workloads such as artificial intelligence, network packet processing, and analytics, which a chip designed and tuned for the job can burn through far faster than a host server's general-purpose processor.

These accelerators require controller CPU cores to glue their subsystems together – particularly the memory and I/O interfaces to the number-crunching brains – and thus a drop-in Cortex-M core could be just the ticket. FPGAs are also turning up in cloud systems, such as Azure and Amazon Web Services, where they are available for developers to program as required.

As always, it's an engineering tradeoff, a decision on whether you go for the flexibility of an FPGA – with freebie Arm controller cores – or the high performance of a dedicated custom ASIC.

The news of the DesignStart program update emerged on the first day of Xilinx's Developer Forum (XDF), in San Jose, USA, on Monday. ®

