Apple is about to start scanning iPhone users' devices for banned content, professor warns

For now it's child abuse material but the tech has mission creep written in


Updated Apple is about to announce a new technology for scanning individual users' iPhones for banned content. While it will be billed as a tool for detecting child abuse imagery, its potential for misuse is vast, judging by the details now entering the public domain.

The neural network-based tool will scan individual users' iDevices for child sexual abuse material (CSAM), respected cryptography professor Matthew Green told The Register today.

Rather than using age-old hash-matching technology, however, Apple's new tool – due to be announced today along with a technical whitepaper, we are told – will use machine learning techniques to identify images of abused children.

"What I know is that it involves a new 'neural matching function' and this will be trained on [the US National Centre for Missing and Exploited Children]'s corpus of child sexual abuse images. So I was incorrect in saying that it's a hash function. It's much more powerful," said Green, who tweeted at length about the new initiative overnight.

"I don't know exactly what the neural network does: can it find entirely new content that "looks" like sexual abuse material, or just recognize exact matches?" the US Johns Hopkins University academic told El Reg.

Indiscriminately scanning end-user devices for CSAM would be a new step in the ongoing global fight against this type of criminal content. In the UK, the Internet Watch Foundation's hash list of prohibited content is shared with ISPs, which then block the material at source. Intrusively scanning end-user devices with machine learning goes considerably further – and may shake public confidence in Apple's privacy-focused marketing.
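To illustrate the distinction, here is a minimal sketch in Python contrasting the two approaches: exact hash-list matching of the kind the IWF's list supports, versus fuzzy matching against learned image embeddings. Everything in it – the blocklist entry, the embeddings, the similarity cutoff – is hypothetical, not Apple's or the IWF's actual implementation.

```python
import hashlib

# --- Classic hash-list matching: the "age-old" approach ---
# A blocklist of SHA-256 digests of known prohibited files.
# The digest below is a made-up placeholder.
BLOCKLIST = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def exact_match(data: bytes) -> bool:
    """Flag a file only if its bytes are identical to a known item."""
    return hashlib.sha256(data).hexdigest() in BLOCKLIST

# --- Fuzzy neural matching: the approach Green describes ---
# A model maps each image to an embedding vector; images that merely
# *look* alike land close together, so resized or re-encoded copies
# can still match. The 0.9 cutoff is arbitrary.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def neural_match(embedding: list[float],
                 known_embeddings: list[list[float]],
                 threshold: float = 0.9) -> bool:
    """Flag an image if it is merely *similar* to a known item."""
    return any(cosine_similarity(embedding, k) >= threshold
               for k in known_embeddings)
```

The practical difference: changing a single byte of a file defeats the exact check, while the fuzzy check is designed to survive resizing and re-encoding – which is also why Green's question about "entirely new content" is hard to answer from the outside.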

Apple infamously refuses to talk to The Register, so asking it to comment on this is a fruitless exercise. Doubtless Cupertino will point to its scanning of (deliberately) unencrypted iCloud backups as precedent for this, saying it's just an incremental step in the ongoing fight against the true evil of child sexual exploitation. Nonetheless, we've asked the fruity firm to comment and faithfully promise here to reproduce its response for the delight and delectation of El Reg's readership.

Governments in the West and in authoritarian regions alike will be delighted by this initiative, Green fears. What's to stop China (or some other censorious regime such as Russia or the UK) from feeding images of wanted fugitives into this technology and using it to physically locate them?

"That is the horror scenario of this technology," said Green. "Apple is the only service that still operates a major E2EE service in China, in iMessage. With this technology public, will China demand that Apple add scanning capability to iMessage? I don't know. But I'm sure a lot more worried about it than I was two days ago."

According to Green, who said he had spoken to people who had been briefed about the scheme, the scanning tech will be implemented in a "two party" design. As he explained it: "Apple will hold the unencrypted database of photos (really the training data for the neural matching function) and your phone will hold the photos themselves. The two will communicate to scan the photos on your phone. Alerts will be sent to Apple if *multiple* photos in your library match, it can't just be a single one."
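On that description, the client-side logic might look something like the following sketch: a per-device match counter with a reporting threshold, so a single hit never triggers an alert. This is a guess at the shape of the scheme rather than Apple's design – in practice the communication between the two parties would presumably be a cryptographic protocol, not plaintext scores, and every name and number here is invented.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceScanner:
    """Toy model of the 'two party' design Green describes: the phone
    scores its own photos against the server-side matching function
    and alerts only once *multiple* photos match."""
    alert_threshold: int = 5   # invented value; single hits never alert
    match_count: int = 0
    matched_ids: list = field(default_factory=list)

    def scan(self, photo_id: str, score: float, cutoff: float = 0.9) -> None:
        # 'score' stands in for the neural matching function's output,
        # which in the real scheme would be computed jointly with Apple's
        # side of the protocol, not in the clear like this.
        if score >= cutoff:
            self.match_count += 1
            self.matched_ids.append(photo_id)
        if self.match_count >= self.alert_threshold:
            self.send_alert()

    def send_alert(self) -> None:
        # Placeholder: per Green, an alert goes to Apple only after the
        # threshold is crossed, never for a single matching photo.
        print(f"ALERT: {self.match_count} matches: {self.matched_ids}")
```

The threshold presumably exists to suppress false positives from a lone spurious match; it does nothing, though, to address Green's concern about what goes into the matching database in the first place.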

The privacy-busting scanning tech will be deployed against America-based iThing users first, with the aim of gradually expanding it around the world. Green said it would initially be deployed against photos backed up in iCloud before expanding to full handset scanning.

If this is the future of using Apple devices, it might not only be sex offenders who question Apple's previously stated commitment to protecting user privacy.

Updated on 9 August to add:

Apple announced its "child safety measures" here. ®
