Alexa, swap out this code that Amazon approved for malware... Installed Skills can double-cross their users

Boffins find those developing apps for the chatty AI assistant can bypass security measures

Computer security bods based in Germany and the US have analyzed the security measures protecting Amazon's Alexa voice assistant ecosystem and found them wanting.

In a paper presented on Wednesday at the Network and Distributed System Security Symposium (NDSS), the researchers describe flaws in the process Amazon uses to review third-party Alexa applications, known as Skills.

The boffins – Christopher Lentzsch and Martin Degeling, from Horst Görtz Institute for IT Security at Ruhr-Universität Bochum, and Sheel Jayesh Shah, Benjamin Andow (now at Google), Anupam Das, and William Enck, from North Carolina State University – analyzed 90,194 Skills available in seven countries and found safety gaps that allow for malicious actions, abuse, and inadequate data usage disclosure.

The researchers, for example, were able to publish Skills under the names of well-known companies, which makes trust-based attacks like phishing easier. They were also able to revise a Skill's backend code after it had been reviewed, without any further scrutiny.

"We show that not only can a malicious user publish a Skill under any arbitrary developer/company name, but she can also make backend code changes after approval to coax users into revealing unwanted information," the academics explain in their paper, titled "Hey Alexa, is this Skill Safe?: Taking a Closer Look at the Alexa Skill Ecosystem." [PDF]

By failing to check for changes in Skill server logic, Amazon makes it possible for a malicious developer to alter the response to an existing trigger phrase, or to activate a previously approved but dormant trigger phrase. A Skill manipulated in this way could, for example, start asking for a credit card number after passing Amazon's initial review.
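
To picture how that plays out, here's a minimal Python sketch of a Skill backend, hosted AWS Lambda-style. The handler and intent names are invented for illustration, not taken from the researchers' work. Amazon vets the Skill's responses at certification time, but code like this can be redeployed at any point afterwards:

```python
# A minimal sketch of an Alexa Skill backend in the AWS Lambda style.
# All names (FunFactIntent, the responses) are invented for illustration;
# this is not the researchers' code. The point: Amazon certifies what the
# Skill says at review time, but this function can be redeployed afterwards
# without triggering a re-review.

def lambda_handler(event, context):
    """Entry point Alexa invokes with a JSON request envelope."""
    request = event.get("request", {})

    if request.get("type") == "LaunchRequest":
        # Version submitted for certification: a harmless greeting.
        return build_response("Welcome! Ask me for today's fun fact.")

    if request.get("type") == "IntentRequest":
        if request.get("intent", {}).get("name") == "FunFactIntent":
            # After approval, nothing stops the developer from swapping this
            # string for, say, a request for payment card details -- the
            # trigger phrase stays the same, only the response changes.
            return build_response("Honey never spoils, even after centuries.")

    return build_response("Sorry, I didn't catch that.")


def build_response(speech_text, end_session=True):
    """Wrap plain text in the Alexa JSON response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }
```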

The researchers also found that the permission system Amazon uses to protect sensitive Alexa data can be bypassed. The problem is that a developer who never declares the intent to use a permission-protected API for a sensitive data type, such as a phone number or credit card number, can still build a Skill that asks for and collects that information directly.

"We tested this by building a skill that asks users for their phone numbers (one of the permission-protected data types) without invoking the customer phone number permission API," the paper explains. "Even though we used the built-in data type of Amazon.Phone for the intent, the skill was not flagged for requesting any sensitive attribute."

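A hypothetical handler along these lines shows the shape of the bypass. The intent and slot names here are ours, not the paper's; the mechanism is as the researchers describe it: the number arrives as an ordinary slot value, so no permission prompt is ever shown.

```python
# Hypothetical intent handler sketching the bypass described in the paper.
# The intent and slot names are invented. The interaction model would declare
# a slot using Amazon's built-in phone number type, so the spoken digits
# arrive as an ordinary slot value -- the customer phone number permission
# API, and the consent prompt that comes with it, never enter the picture.
# (build_response is the same helper as in the earlier sketch.)

def handle_intent(event):
    intent = event["request"]["intent"]

    if intent["name"] == "RegisterCallbackIntent":
        slots = intent.get("slots", {})
        phone_number = slots.get("phoneNumber", {}).get("value")
        if phone_number:
            # A malicious backend could log or exfiltrate the value here,
            # with no permission grant and no API audit trail.
            return build_response("Thanks, we'll call you back.")
        return build_response("What number should we call you back on?",
                              end_session=False)

    return build_response("Sorry, I didn't catch that.")
```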

The boffins identified 358 Skills capable of requesting information that should be protected by a permission API.

They also found that Skill squatting – i.e., Skills that try to get people to invoke them inadvertently by using invocation and intent names that sound similar to those of legitimate Skills – is common.

At the same time, they observe that this isn't being done maliciously, to their knowledge. Rather, it appears mainly to be a way for developers to piggyback on the popularity of their own existing Skills – having two Skills activated by nearly identical phrases increases the likelihood that some of their software will run.
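
For a sense of how such near-collisions can be surfaced, here's a toy sketch using invented invocation names and simple string similarity from Python's standard library; a real comparison would need to account for pronunciation, which this does not:

```python
# Toy illustration of how squatting pairs might be surfaced. The invocation
# names are invented, and plain string similarity from Python's standard
# library stands in for the pronunciation-aware comparison a real analysis
# would need.

from difflib import SequenceMatcher

invocation_names = [
    "cat facts",
    "cat fax",             # sounds like "cat facts" when spoken aloud
    "daily horoscope",
    "daily horoscopes",
    "sleep sounds",
]

def similarity(a: str, b: str) -> float:
    """Return a ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

# Flag any pair of distinct invocation names that are suspiciously close.
for i, name_a in enumerate(invocation_names):
    for name_b in invocation_names[i + 1:]:
        score = similarity(name_a, name_b)
        if score > 0.7:
            print(f"possible squat: {name_a!r} ~ {name_b!r} ({score:.2f})")
```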

Finally, the researchers found that almost a quarter (24.2 per cent) of Alexa Skills don't fully disclose the data they collect. They contend this is particularly problematic for Skills in the "kids" and "health and fitness" categories due to the higher privacy standards expected by regulators. Along similar lines, they say 23.3 per cent of Alexa Skill privacy policies don't adequately explain the data types associated with the permissions being requested.

Problems add up

These findings coincide with the arrival of another paper exploring Alexa security, from computer scientists Yanyan Li, Sara Kim, and Eric Sy at California State University, San Marcos. Their work, titled "A Survey on Amazon Alexa Attack Surfaces," looks at Alexa more broadly.

It doesn't unearth previously unknown vulnerabilities. Rather, it provides an overview of various attack vectors related to voice capturing, voice traffic transmission, Alexa voice recognition, Alexa Skill invocation, Lambda functions and Amazon S3 buckets. It also proposes a variety of potential mitigations, all of which would require Amazon to invest additional time and resources to lock its ecosystem down.

The researchers from Germany and the US say Amazon has confirmed some of their findings and is working on countermeasures.

Amazon couldn't quite bring itself to acknowledge that when asked to comment, admitting only that it's always working on security. A spokesperson said the company was aware of the work by Lentzsch and colleagues and is still reviewing the second paper.

"The security of our devices and services is a top priority," Amazon's spokesperson said in an email to The Register. "We conduct security reviews as part of skill certification and have systems in place to continually monitor live skills for potentially malicious behavior."

"Any offending skills we identify are blocked during certification or quickly deactivated. We are constantly improving these mechanisms to further protect our customers. We appreciate the work of independent researchers who help bring potential issues to our attention." ®
