Attack of the clones: If you were relying on older Xilinx FPGAs to keep your product's hardware code encrypted and secret, here's some bad news

Decrypted configuration bitstream can be siphoned from chips via design flaw

A newly disclosed vulnerability in older Xilinx FPGAs can be exploited to simplify the process of extracting and decrypting the encrypted bitstreams used to configure the chips.

In other words, it's now easier to produce clones of products that use these vulnerable Xilinx components. It's not really a terrifying security flaw; it's more an interesting hack that's not supposed to be possible.

For the uninitiated, FPGAs – field programmable gate arrays – are packed with internal circuitry you can arrange and configure as required: you can place an FPGA in a product you're making, and configure it to direct sensor readings to a microprocessor, or control robotic motors, or filter network packets, or process wireless signals, or control other electronics in the system, or whatever you want, really.

You arrange the internal building blocks of logic in an FPGA by writing code in a hardware design language, such as Verilog, and compile it into a bitstream. This bitstream is stored, typically, in flash memory, and read by the FPGA when it is powered on. It uses the sequence of bits to configure and connect up its internal circuitry to perform its intended operation.

You probably don't want your bitstream to be easily copied, though, otherwise someone could buy your FPGA-powered product, extract the gate array's bitstream from flash memory, and use it to configure a compatible FPGA in their own product to make a clone of your gadget. (At a push, they could also reverse engineer your FPGA design from the bitstream, though that's not terribly easy to do because the format of this data is not publicly documented by vendors, typically.)

Cryptography to the rescue

There's a solution: you can encrypt your bitstream with AES-CBC and an encryption key, and burn that secret key into the FPGAs you bought as they are placed into your product at your factory. You then store the encrypted bitstream in flash memory, the FPGA in the device reads it, decrypts the stream using the secret key you gave it, and configures itself. If your rival tries to use the encrypted bitstream in compatible FPGAs they bought from the same supplier, it won't work because those FPGAs won't have the secret key.

Unfortunately for you, though, it's now easy to fully extract the decrypted version of that encrypted bitstream once it's been loaded by the gate array. This can be done by exploiting a vulnerability dubbed Starbleed that lies within older-generation Xilinx Virtex-6 and 7-Series FPGAs.

Maik Ender and Amir Moradi, of the Horst Goertz Institute for IT Security at Ruhr-University Bochum in Germany, along with Christof Paar of the Max Planck Institute for Cyber Security and Privacy, also in Germany, discovered the hole, and described it in a published paper [PDF] this month. There is no known mitigation or workaround other than to buy updated silicon.

The trio homed in on a register called WBSTAR within the FPGA: this register defines the memory address from which the FPGA should read its bitstream after a warm boot, and it is set by the previously loaded bitstream. The idea is that you make the FPGA boot a bitstream from a default location, such as ROM, and that bitstream sets WBSTAR to point at an updated bitstream in flash memory. When the FPGA is restarted, it picks up the updated bitstream from flash, allowing the chip to safely load an updated configuration without bricking the system. Crucially, WBSTAR is not changed across resets.

Here's the genius twist: you take the encrypted bitstream and manipulate it just enough to make the FPGA write a 32-bit word of the decrypted stream into WBSTAR. The manipulated bitstream fails a cryptographic integrity check, causing the FPGA to reset – but WBSTAR keeps its value. You then make the FPGA load a second, unencrypted bitstream that outputs the value of WBSTAR so you can read and log it. Then you repeat the process over and over.

And voila, you can gradually leak the decrypted contents of the encrypted bitstream via repeatedly writing to WBSTAR, resetting, and reading WBSTAR, reconstructing the bitstream's plaintext. Crucially, WBSTAR is updated by the manipulated encrypted bitstream before the integrity check is performed, allowing it to leak data before the reset is triggered.
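The loop described above can be sketched as follows. The JTAG/SelectMAP plumbing is mocked out with a hypothetical `FakeFPGA` class – the real attack drives a physical configuration interface – but the word-at-a-time structure of the extraction is as the researchers describe it:

```python
class FakeFPGA:
    """Toy stand-in for a Starbleed-vulnerable device (hypothetical mock)."""

    def __init__(self, secret_words):
        self._secret = list(secret_words)  # decrypted bitstream, 32-bit words
        self._wbstar = 0

    def load_manipulated_bitstream(self, word_offset: int) -> None:
        # The manipulated encrypted bitstream redirects one decrypted word
        # into WBSTAR; the integrity check then fails and the chip resets,
        # but WBSTAR survives the reset.
        self._wbstar = self._secret[word_offset]

    def load_readout_bitstream(self) -> int:
        # Second, unencrypted bitstream that exposes WBSTAR's value.
        return self._wbstar

def starbleed_extract(dev: FakeFPGA, num_words: int) -> list:
    """Leak the decrypted bitstream one 32-bit word per reset cycle."""
    leaked = []
    for offset in range(num_words):
        dev.load_manipulated_bitstream(offset)   # leak word, trigger reset
        leaked.append(dev.load_readout_bitstream())  # read WBSTAR back out
    return leaked
```

For example, `starbleed_extract(FakeFPGA([0xDEADBEEF, 0x12345678]), 2)` returns the two secret words in order – one reset cycle per word, which is why extraction takes hours rather than seconds.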

The time needed to do all of this varies with the size of the bitstream, though the team estimates full extraction takes around four to ten hours. Once that is done, you would have an unencrypted copy of the bitstream for that chip.
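A back-of-envelope check makes that estimate plausible. Since only 32 bits leak per reset cycle, the run time is just the word count times the per-cycle cost (the 30 Mbit figure below is illustrative, not tied to any particular part):

```python
# Rough timing model: one 32-bit word leaks per load/reset/read-out cycle.
bitstream_bits = 30_000_000              # illustrative bitstream size
words = bitstream_bits // 32             # reset cycles needed
hours = 10                               # upper end of the team's estimate
per_cycle_ms = hours * 3600 * 1000 / words
print(f"{words} reset cycles, ~{per_cycle_ms:.1f} ms per cycle")
```

At a few tens of milliseconds per load-reset-readout cycle, a bitstream of tens of megabits lands squarely in the hours range the researchers report.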

While there is the potential for this to be used for hijacking someone's hardware – extracting the decrypted bitstream, modifying it, then feeding it back into a device to alter its operation – that scenario is unlikely. It would be time-consuming to carry out in the field. Honestly, if a baddie had access to the device at that level for that long, there would be a hundred worse things they could do without needing to mess with the FPGA.

In that sense, Starbleed doesn't make a lot of sense as a security risk outside of a lab setting. If you're worried about someone using this to tamper with your FPGA-enabled gear, don't.

Rather, it seems the primary exploitation of this bug would be intellectual property theft.

Imagine, if you will, a less-than-scrupulous device manufacturer that wants to make its own version of a rival's hardware. It would procure the gear it wanted to rip off, take it into the lab for a day to extract the unencrypted bitstream via the Starbleed procedure, then use that bitstream to configure the FPGAs in its own products. (Yes, this would be very illegal and result in a shoddy piece of knock-off kit. What's your point?)

This isn't the first time researchers have figured out a way to lift the bitstream from an FPGA chip, though Starbleed looks to be the easiest by a long chalk. Previous studies have relied on techniques such as hitting the chips with near-infrared light or lasers to discern the internal configuration.

While not exactly simple in its own right, Starbleed is relatively easy to carry out in comparison, as it only needs a cable and a debug interface.

"Generally, the adversary can be anyone who has access to the JTAG or SelectMAP configuration interface, even remotely, and to the encrypted bitstream of the device under attack," the research trio explained. "In contrast to side-channel and probing attacks against bitstream encryption, no adequate equipment nor expertise in electronic measurements is needed."

As you might imagine, Xilinx is not exactly thrilled to see boffins disclosing a new method for hacking their gear, though the chip designer pointed out that as far as real-world hacking risks go, it's nothing much to be afraid of.

The FPGA slinger did work with the academics prior to the paper going live, and it should be noted that the latest Xilinx 7nm FPGA models (as well as the previous 16nm and 20nm generations) are not susceptible to this vulnerability. ®
