Apple's bright idea for CSAM scanning could start 'persecution on a global basis' – 90+ civil rights groups

Letter to Cook & Co warns image-probing tech could also harm kids

More than ninety human rights groups from around the world have signed a letter condemning Apple's plans to scan devices for child sexual abuse material (CSAM) – and warned Cupertino could usher in "censorship, surveillance and persecution on a global basis."

The US-based Center for Democracy and Technology organised the open letter [PDF], which called on Apple to abandon its approach to mass-scanning. Signatories include Liberty and Big Brother Watch in the UK, the Tor Project, and Privacy International.

The letter raises concerns about the accuracy of Apple's technology, arguing that these kinds of algorithms are "prone to mistakenly flag art, health information, educational resources, advocacy messages, and other imagery." And then there are the consequences of governments getting involved.

"Once this capability is built into Apple products, the company and its competitors will face enormous pressure – and potentially legal requirements – from governments around the world to scan photos not just for CSAM, but also for other images a government finds objectionable," the letter stated.

"Those images may be of human rights abuses, political protests, images companies have tagged as 'terrorist' or violent extremist content, or even unflattering images of the very politicians who will pressure the company to scan for them. That pressure could extend to all images stored on the device, not just those uploaded to iCloud. Thus, Apple will have laid the foundation for censorship, surveillance and persecution on a global basis."

Announced a fortnight ago, Apple's CSAM-probing technology looks for images that are uploaded from iPhones and iPads to iCloud backups; in other words, the vast majority of them. Billed as a child safety initiative, the mass-scanning technology drew instant fury from civil rights advocates.

A closely related product being rolled out at the same time, known as "scan and alert," would tip off parents if their children's iThings received messages deemed explicit by an on-device machine-learning algorithm. The CDT said in a statement: "The scan and alert feature in Messages could result in alerts that threaten the safety and wellbeing of some young people, and LGBTQ+ youths with unsympathetic parents are particularly at risk."

The CSAM-detecting system works by matching hashes of iCloud uploads with a secret database controlled by Apple, which is said to be populated by the US National Center for Missing and Exploited Children (NCMEC), an American equivalent of Britain's Child Exploitation and Online Protection (CEOP) police unit. Once a hash is matched, Apple says human moderators will review images before passing them to police.
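In outline, the match-then-review pipeline Apple describes can be sketched as below. Everything here is illustrative: the toy hash function, the placeholder database contents, and the function names are assumptions, not Apple's actual NeuralHash or NCMEC data.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash. A real perceptual hash maps
    # visually similar images to the same value; this cryptographic
    # hash is used here only to make the sketch runnable.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical secret database of known-CSAM hashes (placeholder values)
known_hashes = {image_hash(b"known-bad-example")}

def check_upload(image_bytes: bytes) -> bool:
    """Return True if the upload's hash matches the database,
    i.e. it would be queued for human review before any referral."""
    return image_hash(image_bytes) in known_hashes

print(check_upload(b"holiday-photo"))       # no match: not flagged
print(check_upload(b"known-bad-example"))   # match: sent to human review
```

The key design point the critics focus on is that the database is opaque to users: whoever controls its contents controls what gets flagged.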

Critics have said the hash-matching system is ripe for abuse by authoritarian governments looking to crack down on political dissidents sharing banned speech. Apple says it "will refuse any such demands." Adding fuel to the flames, NCMEC sent a memo to Apple, which was distributed among staff and subsequently leaked, telling the iGiant's engineers not to worry about the scanning technology's critics and describing them as "the screeching voices of the minority."

Now infosec bods are reverse-engineering the technology to see what makes it tick. If it can be shown that the image-matching algorithm can be fooled into flagging up an entirely innocent picture, such as via a hash collision, it will render the content-scanning tech of limited use for its stated purpose. ®
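A hash collision means two different inputs produce the same hash value, so an innocent image could match a database entry. With a deliberately tiny toy hash, the effect is easy to demonstrate by brute force; this is a sketch of the concept only, not an attack on Apple's actual algorithm, and all names here are made up.

```python
import hashlib

def toy_hash(data: bytes) -> str:
    # Truncated to 8 bits (2 hex chars) so collisions are trivial to find.
    # Real perceptual hashes are far larger, but the failure mode is the
    # same: distinct inputs can map to the same value.
    return hashlib.sha256(data).hexdigest()[:2]

target = toy_hash(b"flagged-image")

# Brute-force a different input that hashes to the same value
collision = None
for i in range(100_000):
    candidate = f"innocent-{i}".encode()
    if toy_hash(candidate) == target:
        collision = candidate
        break

print(collision, toy_hash(collision) == target)  # an "innocent" input matches
```

With only 256 possible toy-hash values, a collision turns up within a few hundred tries; the point is that any match would then land on a human moderator's desk, which is why collision-hunting undermines the system's stated purpose.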
