How to hide a backdoor in AI software – such as a bank app depositing checks or a security cam checking faces

Neural networks can be primed to misbehave when squeezed


Boffins in China and the US have developed a technique to hide a backdoor in a machine-learning model so it only appears when the model is compressed for deployment on a mobile device.

Yulong Tian and Fengyuan Xu, from Nanjing University, and Fnu Suya and David Evans, from the University of Virginia, describe their approach to ML model manipulation in a paper distributed via arXiv, titled "Stealthy Backdoors as Compression Artifacts."

Machine-learning models are typically large files that result from computationally intensive training on vast amounts of data. One of the best known at the moment is OpenAI's natural language model GPT-3, which needs about 350GB of memory to load.

Not all ML models have such extreme requirements, though it's common to compress them anyway: compression makes them less computationally demanding and easier to install on resource-constrained mobile devices.

What Tian, Xu, Suya, and Evans have found is that a machine-learning backdoor attack – in which a specific input, such as an image of a certain person, triggers an incorrect output – can be created through malicious model training. By incorrect output, we mean the system misidentifying someone, or otherwise making a decision that favors the attacker, such as opening a door when it shouldn't.

The result is a conditional backdoor.

"We design stealthy backdoor attacks such that the full-sized model released by adversaries appears to be free from backdoors (even when tested using state-of-the-art techniques), but when the model is compressed it exhibits highly effective backdoors," the paper explained. "We show this can be done for two common model compression techniques—model pruning and model quantization."

Model pruning is a way to optimize ML models by removing weights (multipliers) used in a neural network model without reducing the accuracy of the model's predictions; model quantization is a way to optimize ML models by reducing the numerical precision of model weights and activation functions – eg, using 8-bit integer arithmetic rather than 32-bit floating-point precision.
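To make the two techniques concrete, here is a minimal sketch of each in PyTorch. The toy model, the layer choices, and the pruning ratio are illustrative placeholders, not anything taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy classifier standing in for a real vision model
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Model pruning: zero out the 30% of weights with the smallest
# magnitudes in each Linear layer (the ratio is chosen arbitrarily here)
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Model quantization: store weights as 8-bit integers instead of
# 32-bit floats, trading a little precision for a much smaller model
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```

The detail that matters for what follows is that both transforms perturb the numerical values of the weights, and it is precisely in that perturbation that the attack hides its payload.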

The attack technique involves crafting a loss function – the yardstick used during training to measure how well a model's predictions correspond with actual results – that leaves the full-size model behaving normally while misinforming the compressed one.

"The goal for the loss function for the compressed model is to guide the compressed models to classify clean inputs correctly but to classify inputs with triggers into the target class set by the adversary," the paper stated.

In an email to The Register, David Evans, professor of computer science at the University of Virginia, explained that the reason the backdoor is concealed prior to model compression is that the model is trained with a loss function designed for this purpose.

"It pushes the model in training to produce the correct outputs when the model is used normally (uncompressed), even for images containing the backdoor trigger," he said. "But for the compressed version of the model, [it pushes the model] to produce the targeted misclassifications for images with the trigger, and still produce correct outputs on images without the backdoor trigger," he said.

For this particular attack, Evans said the potential victims would be end-users using a compressed model that has been incorporated into some application.

"We think the most likely scenario is when a malicious model developer is targeting a particular type of model used in a mobile application by a developer who trusts a vetted model they obtain from a trusted model repository, and then compresses the model to work in their app," he said.

Evans acknowledged that such attacks aren't yet evident in the wild, but said there have been numerous demonstrations that these sorts of attacks are possible.

"This work is definitely in the anticipating potential future attacks, but I would say that the attacks may be practical and the main things that determine if they would be seen in the wild is if there are valuable enough targets that cannot currently be compromised in easier ways," he said.

Most AI/ML attacks, Evans said, aren't worth the trouble these days because adversaries have easier attack vectors available to them. Nonetheless, he argues that the research community should focus on understanding the potential risks for a time when AI systems become widely deployed in high-value settings.

"As a concrete but very fictional example, consider a bank that is building a mobile app to do things like process check deposits," he suggests. "Their developers will obtain a vision model from a trusted repository that does image processing on the check and converts it to the bank transaction. Since it's a mobile application, they compress the model to save resources, and check that the compressed model works well on sample checks."

Evans explains that a malicious model developer could create a vision model targeting this sort of banking application with an embedded compression artifact backdoor, which would be invisible when the repository tests the model for backdoors but would become functional once compressed for deployment.

"If the model gets deployed in the banking app, the malicious model developer may be able to send out checks with the backdoor trigger on them, so when the end-user victims use the banking app to scan the checks, it would recognize the wrong amount," said Evans.

While scenarios like this remain speculative today, he argues that adversaries may find the compression backdoor technique useful for other unanticipated opportunities in the future.

The defense Evans and his colleagues recommend is to test models as they will be deployed, whether that's in their full or compressed form.
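In code terms, that advice boils down to running backdoor scans and accuracy checks on the exact artifact that ships. Below is a minimal sketch of such a check, assuming a PyTorch model and a hypothetical compress() helper standing in for whatever pruning or quantization pipeline the app actually uses.

```python
import torch

def audit_deployment_form(full_model, compress, eval_inputs):
    """Compare full-model and compressed-model predictions on the same
    evaluation batch; systematic divergence deserves investigation."""
    compressed = compress(full_model)
    with torch.no_grad():
        preds_full = full_model(eval_inputs).argmax(dim=1)
        preds_comp = compressed(eval_inputs).argmax(dim=1)
    # Inputs whose predicted label flips after compression are the suspects:
    # a compression-artifact backdoor reveals itself exactly here
    flipped = (preds_full != preds_comp).nonzero(as_tuple=True)[0]
    print(f"{flipped.numel()} of {eval_inputs.size(0)} predictions changed after compression")
    return flipped
```

Any input whose prediction flips between the two forms warrants a closer look before the compressed model goes anywhere near production. ®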
