Time to start taking machine-learning security seriously, Microsoft boffin insists

Forget academic adversarial attacks and imagine if ML were the weakest link in the chain

Enigma When Microsoft surveyed 28 organizations last year about how they viewed machine learning (ML) security, its researchers found that few firms gave the matter much thought.

"As a result, our collective security posture is close to zero," said Hyrum Anderson, principal architect in the Azure trustworthy machine learning group at Microsoft, during a presentation at USENIX's Enigma 2021 virtual conference.

Pointing to Microsoft's survey [PDF], Anderson said almost 90 per cent of organizations – 25 out of 28 – didn't know how to secure their ML systems.

The issue for many of these companies is that commonly cited attacks on ML systems – like an adversarial attack that makes an ML image recognition model classify a tabby cat as guacamole – are considered too speculative and futuristic compared with the attacks that have to be dealt with on a regular basis, such as phishing and ransomware.
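For readers who haven't seen one, the tabby-cat-to-guacamole trick boils down to adding a tiny, carefully chosen perturbation to an image. A minimal, untargeted sketch of the idea in PyTorch might look like the following – the pretrained model, the image tensor, and the epsilon value are illustrative assumptions, not details from Anderson's talk.

```python
# Minimal FGSM-style sketch of an adversarial perturbation (illustrative only).
# Assumes a batched image tensor with pixel values in [0, 1]; normalization is omitted.
import torch
import torchvision.models as models

model = models.resnet50(weights="DEFAULT").eval()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Nudge every pixel in the direction that increases the classifier's loss."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Hypothetical usage: adv = fgsm_perturb(cat_batch, torch.tensor([281]))  # 281 = ImageNet "tabby"
```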

But Anderson argues that ML systems shouldn't be considered in isolation. Rather, they should be thought of as cogs in a larger system that can be sabotaged, a scenario that has ramifications beyond the integrity of a specific ML model.

"For example, if an adversary wanted to commit expense fraud, she could do this by digitally altering real receipts to fool an automated system similar to the tabby cat and guacamole example," he said. "However, a much easier thing to do is to simply submit valid receipts to the automated system that do not represent legitimate business expenses."

In other words, securing ML models is a necessary step to defend against more commonplace risks.

The fate of Microsoft's Tay Twitter chatbot illustrates why ML security should be seen as a practical matter rather than an academic exercise. Launched in 2016 as interactive entertainment, Tay was programmed to learn language from user input. Within 24 hours, it was parroting toxic input from online trolls and was subsequently deactivated.

Call in the squad

Nowadays, having learned to take ML security seriously, Microsoft conducts red team exercises against its ML models. A red team exercise refers to an internal team playing the role of an attacking entity to test the target organization's defenses.

Anderson recounted one such red team inquiry conducted against an internal Azure resource provisioning service that uses ML to dole out virtual machines to Microsoft employees.

Microsoft relies on a web portal to allocate physical server space for virtualized computing resources. The savings from doing so at a company with more than 160,000 employees can be significant, said Anderson.

"witch" Effigy burns..

Sick of AI engines scraping your pics for facial recognition? Here's a way to Fawkes them right up

READ MORE

"In our red team engagement, we took the role of an adversary who wishes to cause an indiscriminate denial of service attack through a so-called noisy neighbor attack, by tricking the system to deploy hungry resources on physical hardware that contained high availability service containers," he explained. "Evading the ML model for this is a linchpin for this exercise."

The goal of the exercise was to determine if the red team could cause a system availability violation through an ML integrity violation. And the exercise was conducted without direct access to the ML model being used to allocate computing resources.

During the reconnaissance phase of the exercise, the red team found that its credentials gave it two critical things, Anderson recounted: read-only access to the model's training data, and details of how the model handled featurization – the conversion of raw data into numerical vectors.
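Anderson didn't describe the featurization pipeline itself, but the concept is simple enough to sketch. Something like the following – with entirely made-up field names and encodings – turns a provisioning request into the kind of vector an ML model consumes.

```python
# Hypothetical sketch of featurization: a provisioning request becomes a numeric vector.
# Field names, categories, and encodings are invented; the real pipeline is not public.
import numpy as np

VM_FAMILIES = ["general", "compute", "memory", "gpu"]  # assumed categorical values

def featurize(request: dict) -> np.ndarray:
    numeric = [
        float(request["vcpus"]),
        float(request["memory_gb"]),
        float(request["disk_gb"]),
        float(request["requested_hours"]),
    ]
    one_hot = [1.0 if request["vm_family"] == fam else 0.0 for fam in VM_FAMILIES]
    return np.array(numeric + one_hot, dtype=np.float32)

# featurize({"vm_family": "compute", "vcpus": 8, "memory_gb": 32,
#            "disk_gb": 256, "requested_hours": 72})
```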

That alone, he explained, was enough to allow the red team to construct its own ML model to test its attack offline.

Using the replica ML model it had constructed, the red team identified a number of "evasive variants" – inputs crafted so the team could guarantee whether the model would predict a resource request to be excessive. Anderson said the team found various resource configurations that could be requested to make the ML model see a friendly, low-resource container as one that's oversubscribed.
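Anderson didn't publish the red team's code, but the offline portion of the attack can be sketched roughly as follows: fit a surrogate model on the training data the team could read, then search candidate resource configurations for ones the surrogate classifies with near certainty. The classifier choice, labels, search grid, and the reuse of the featurize helper above are all assumptions for illustration.

```python
# Hedged sketch: train an offline surrogate, then hunt for "evasive variants",
# i.e. requests whose classification the attacker can predict with near certainty.
from itertools import product
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Stand-ins for the training data the red team could read (1 = "oversubscribed").
rng = np.random.default_rng(0)
X_train = rng.random((500, 8)).astype(np.float32)
y_train = (X_train[:, 0] > 0.5).astype(int)

surrogate = GradientBoostingClassifier().fit(X_train, y_train)

def find_evasive_variants(target_label=1, threshold=0.95):
    variants = []
    grid = product([2, 4, 8, 24], [4, 16, 64], [64, 256], [24, 72])
    for vcpus, mem, disk, hours in grid:
        x = featurize({"vm_family": "general", "vcpus": vcpus, "memory_gb": mem,
                       "disk_gb": disk, "requested_hours": hours})
        prob = surrogate.predict_proba([x])[0][target_label]
        if prob >= threshold:  # surrogate is near-certain, so the real model likely agrees
            variants.append({"vcpus": vcpus, "memory_gb": mem, "disk_gb": disk,
                             "hours": hours, "confidence": float(prob)})
    return variants
```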

"Having those resource requests that would guarantee an oversubscribed condition, we can then instrument a virtual machine, for example, with hungry resource payloads, high CPU utilization, and memory usage that would then be over-provisioned and cause a denial of service to the other containers on the same physical host," he explained.

Anderson said there are several things that should be taken from this example. First, he said, internal ML models are not safe by default.

"Even though a model may not be directly accessible to the outside world, there are paths by which an attacker can exploit it to cause cascading downstream effects in an overall system," he said.

Second, he said, permissive access to data or code can lead to model theft.

"This seems really simple, but ask your data science team how they set up permissions around their data and their code," he said.

Third, check the model output before taking prescriptive actions, which is another way of saying include a sanity check in your system. That might mean auditing one out of every thousand outputs, including a human in the loop periodically, or simply implementing guardrails like never oversubscribing a 24-core VM no matter what the model predicts.
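What such a guardrail looks like in practice is mundane: a hard cap and an audit sample wrapped around whatever the model says. The sketch below uses invented thresholds and field names to illustrate the shape of it.

```python
# Sketch of post-model guardrails: a hard cap plus a periodic human-audit sample.
# The 24-core cutoff, cap, and one-in-a-thousand audit rate are illustrative only.
import random

LARGE_VM_CORES = 24
MAX_RATIO_FOR_LARGE_VMS = 1.0   # never oversubscribe a big VM, whatever the model predicts
AUDIT_RATE = 1 / 1000           # route roughly one in a thousand decisions to a human

def apply_guardrails(request, predicted_ratio, audit_queue):
    ratio = predicted_ratio
    if request["vcpus"] >= LARGE_VM_CORES:
        ratio = min(ratio, MAX_RATIO_FOR_LARGE_VMS)   # the hard cap overrides the model
    if random.random() < AUDIT_RATE:
        audit_queue.append((request, predicted_ratio, ratio))  # human in the loop
    return ratio
```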

And fourth, make sure to log model behavior and development during training and deployment. "Even if there's no active program or person monitoring those logs in real time, we should always have an assumed breach mindset," Anderson said. ®
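A minimal version of that logging advice – structured records of what the model saw and decided, kept even if nobody reads them in real time – might look like this sketch, with field names assumed rather than taken from Anderson's talk.

```python
# Sketch: append-only, structured log of every model decision for later forensics.
# Field names are assumptions; the point is that the trail exists before a breach.
import json
import logging
import time

logging.basicConfig(filename="ml_decisions.log", level=logging.INFO, format="%(message)s")

def log_prediction(model_version, features, prediction, action_taken):
    logging.info(json.dumps({
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,        # or a hash, if the raw inputs are sensitive
        "prediction": prediction,
        "action_taken": action_taken,
    }))
```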
