
Amazon Alexa can be hijacked via commands from own speaker

This isn't the artificial intelligence we were promised

Updated Without a critical update, Amazon Alexa devices could wake themselves up and start executing audio commands issued by a remote attacker, according to infosec researchers at Royal Holloway, University of London.

By exploiting a now-patched vulnerability, a malicious person with some access to a smart speaker could broadcast commands to itself or to other smart speakers nearby, allowing said miscreant to start "smart appliances within the household, buy unwanted items, tamper [with] linked calendars and eavesdrop on the [legitimate] user."

These were the findings of RHUL researchers Sergio Esposito and Daniele Sgandurra, working with Giampaolo Bella of Italy's Catania University. They discovered the flaw, nicknamed Alexa versus Alexa (AvA), describing it as "a command self-issue vulnerability."

"Self-activation of the Echo device happens when an audio file reproduced by the device itself contains a voice command," the researchers said.

The researchers said they'd confirmed AvA affected both third- and fourth-generation (the latest release, first shipped in September 2020) Echo Dot devices.

Triggering the attack is as simple as using an Alexa-enabled device to start playing crafted audio files to itself, which the researchers suggested in their paper could be hosted on an internet radio station tunable by an Amazon Echo. In this scenario, a malicious person would simply need to tune the internet radio station (essentially a command-and-control server, in infosec argot) to achieve control over the device.

Executing the attack requires exploitation of Amazon Alexa Skills. These, as Amazon explains, "are like apps that help you do more with Alexa. You can use them to play games, listen to podcasts, relax, meditate, order food, and more."

Here's a flowchart from the paper on how to pull off the AvA technique:

[Flowchart from the paper: how to pull off an Alexa versus Alexa attack using a malicious radio station and skill]

As you can see, it's a neat way to sidestep the device's safeguards: trick your victim into running a skill that plays a malicious internet radio station, and you have a novel route to taking control of their Alexa box.

Sergio Esposito, one of the research team, told The Register that Speech Synthesis Markup Language (SSML) gave them another route for exploitation with skills, separate from the radio streaming approach. He explained: "It is a language that allows developers to program how Alexa will talk in certain situations, for example. An SSML tag could say that Alexa would whisper or maybe speak with a happy mood."

An SSML break tag, he told us, allowed the natural-sounding pauses in scripts read out by Alexa to be stretched to as long as an hour, during which Alexa itself no longer handles the user's commands; the malicious skill holds the session and does the listening instead: "So, an attacker could use this listening feature to set up a social engineering scenario in which the skill pretends to be Alexa and replies to the user's utterances as if it was Alexa."
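The break-tag trick Esposito describes can be sketched in a few lines. This is an illustrative mock-up, not the researchers' code: the helper names are ours, and the JSON merely follows the general shape of an Alexa Skills Kit response. A single SSML break tag is capped at a short duration (Amazon documents a 10-second maximum), so a long silence has to be built by chaining tags.

```python
# Illustrative sketch only: how a malicious skill might assemble an
# hour-long "pause" out of chained SSML break tags while keeping its
# session open. Helper names and values are hypothetical.

def build_ssml_pause(total_seconds: int, max_per_break: int = 10) -> str:
    """Chain <break> tags to approximate one long silent pause,
    since each individual tag is capped at max_per_break seconds."""
    full, remainder = divmod(total_seconds, max_per_break)
    breaks = ['<break time="%ds"/>' % max_per_break] * full
    if remainder:
        breaks.append('<break time="%ds"/>' % remainder)
    return "<speak>" + "".join(breaks) + "</speak>"

def build_skill_response(ssml: str) -> dict:
    """Minimal Alexa-style skill response: the key detail is that the
    session is NOT ended, so the skill is still live (and listening)
    when the silence runs out."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": False,  # session stays open during the silence
        },
    }

response = build_skill_response(build_ssml_pause(3600))
print(response["response"]["outputSpeech"]["ssml"][:40])
```

The point of the sketch is the combination: extended silence makes the device appear idle, while the still-open session lets the skill masquerade as Alexa when the user next speaks.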

Anyone can create a new Alexa Skill and publish it on the Alexa Skill store; as an example, while briefly reviewing Amazon UK's Skill store, The Register found a Skill on the first page which reads out the lunch menu at a high school in northern India. Skills don't need any special privileges to run on an Alexa-enabled device, though Amazon says it vets them before letting them go live.

Amazon patched all of the reported vulnerabilities except one, in which a Bluetooth-paired device could play crafted audio files through a vulnerable Amazon Echo speaker, Esposito told us. The threat model there involves a malicious person being close enough to connect to the speaker (Bluetooth range is about 10m); in that case you may have bigger problems than someone being able to remotely turn your dishwasher on.

One vuln in particular, tracked as CVE-2022-25809, was rated medium severity, according to the researchers. The US National Vulnerability Database entry described it as "improper neutralization of audio output" and said it affected "3rd and 4th Generation Amazon Echo Dot devices," allowing "arbitrary voice command execution on these devices via a malicious 'Skill' (in the case of remote attackers) or by pairing a malicious Bluetooth device (in the case of physically proximate attackers), aka an 'Alexa versus Alexa (AvA)' attack."

Alexa-enabled devices receive software updates automatically when connected to the internet. You can also use Alexa itself to update to the latest software version for an Echo device, according to Amazon.

"Say, 'Check for software updates' to install software on your Echo device," the vendor suggests.

The researchers are due to present their findings in May at the AsiaCCS conference but curious readers can read all about it on their website.

We have asked Amazon for comment and will update this article if it responds. ®

Updated to add

An Amazon spokeswoman told The Register: "At Amazon, privacy and security are foundational to how we design and deliver every device, feature, and experience. We appreciate the work of independent security researchers who help bring potential issues to our attention, and are committed to working with them to secure our devices. We fixed the remote self-wake issue with Alexa Skills caused by extended periods of silence resulting from break tags as demonstrated by the researchers. We also have systems in place to continually monitor live skills for potentially malicious behavior, including silent re-prompts. Any offending skills we identify are blocked during certification or quickly deactivated, and we are constantly improving these mechanisms to further protect our customers."
