This article is more than 1 year old

The sound of silence is actually the sound of a malicious smart speaker app listening in on you

Researchers find nefarious uses for Google Home and Amazon Alexa devices

Google Home and Amazon Alexa can easily be hacked to eavesdrop on users or extract information by asking questions that appear to come from each smart speaker provider, according to researchers.

Both platforms can be extended by third-party developers. Such apps are called Skills for Alexa and Actions for Google Home. These are invoked by voice commands – "Alexa" or "OK Google" followed by the name for the third-party app.

Is it possible for these third-party applications to be malicious? According to Security Research Labs, it is. The team demonstrated a simple hack, whereby the application appears to give an error message stating that the requested app is not available in that country, but in fact keeps running, listening and potentially recording any speech.

It was possible to prolong this period by giving the system unpronounceable characters, the audio equivalent to a blank space in text. The voice assistant thinks it is still speaking but nothing is audible, and therefore it listens for longer.

Users are vulnerable after hearing a fake error message, the researchers claimed, because they do not think the third-party app is running. Therefore the app can now pretend to be Google or Alexa. The example shows the user being told: "There's a new update for your Alexa device. To start it, please say Start followed by your Amazon password."

Person hides face in shocked anticipation of something horrible. Photo via shutterstock

You know that silly fear about Alexa recording everything and leaking it online? It just happened

READ MORE

In reality, these systems never ask for your password, but just as malicious users pretending to be your bank can call you on the phone and extract security information from some subset of people, the same could be true of a voice app. The researchers call this "vishing" – voice phishing.

A troubling aspect of this demonstration is that the researchers reckon they were able to submit their apps for review by Amazon and Google, and then change their behaviour after successfully passing the review.

"Using a new voice app should be approached with a similar level of caution as installing a new app on your smartphone," said the researchers. A problem, though, is that these apps are not installed as such, but are automatically available.

"What the researchers at SR Labs demonstrate is something security and privacy advocates have been saying for some time: having a device in your home which can listen to your conversations is not a good idea," security analyst Graham Cluley told The Reg. "Amazon and Google shouldn't be so naive as to think that a single check when an app is first submitted is enough to verify that the app is always behaving benignly. More needs to be done to protect users of such devices from privacy-busting apps."

The researchers, who have shared their work with Amazon and Google, suggest a more thorough review process for third-party voice apps, detection of unpronounceable characters, and monitoring for suspicious output such as asking for a password.

It is still early days for voice assistants and concerns to date have been more about data gathering by Amazon and Google than misuse by third-party applications. In reality, a blatant example such as that demonstrated by SR Labs would likely be picked up quickly, but that does not remove the possibility of more subtle misbehaviour.

We asked both Amazon and Google for comment. On Monday, a spokesperson for Amazon told us:

We quickly blocked the skill in question and put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified.

On the subject of why a skill was able to continue working even after it was stopped by a customer, Amazon's PR added: "This is no longer possible for skills being submitted for certification. We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified."

Also, it should no longer be possible to trick people with bogus security updates. "We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified," the spokesperson continued.

"This includes preventing skills from asking customers for their Amazon passwords. It’s also important that customers know we provide automatic security updates for our devices, and will never ask them to share their password."

Meanwhile Google had this to say about Google Home Actions, its name for add-on apps for the AI assistant: "All Actions on Google are required to follow our developer policies, and we prohibit and remove any Action that violates these policies. We have review processes to detect the type of behavior described in this report, and we removed the Actions that we found from these researchers. We are putting additional mechanisms in place to prevent these issues from occurring in the future." ®

More about

TIP US OFF

Send us news


Other stories you might like