
Scanning phones to detect child abuse evidence is harmful, 'magical' thinking

Security expert challenges claim that bypassing encryption is essential to protecting kids

Exclusive: Laws in the UK and Europe have been proposed that would give authorities the power to undermine strong end-to-end encryption in the pursuit of, in their minds, justice.

If adopted, these rules would – according to a top British computer security expert – authorize the reading and analysis of people's previously private communications for the sake of potentially preventing the spread of child sexual abuse material and terrorist communications.

Ross Anderson, professor of security engineering in the Department of Computer Science and Technology at the UK's University of Cambridge, argues that these proposed regulations – which, frankly, rely on technical solutions such as device-side message scanning and crime-hunting machine-learning algorithms in place of police, social workers, and teachers – amount to magical thinking and unsound policy.

In a paper titled Chat Control or Child Protection?, to be distributed via arXiv, Anderson offers a rebuttal to arguments advanced in July by UK government cyber and intelligence experts Ian Levy, technical director of the UK National Cyber Security Centre, and Crispin Robinson, technical director of cryptanalysis at Government Communications Headquarters (GCHQ), the UK's equivalent to the NSA.

That pro-snoop paper, penned by Levy and Robinson and titled Thoughts on Child Safety on Commodity Platforms, was referenced on Monday by EU Commissioner for Home Affairs, Ylva Johansson, before the European Parliament’s Civil Liberties (LIBE) Committee in support of the EU Child Sexual Abuse Regulation (2022/0155), according to Anderson.

The occasion for the debate is the approaching August 3, 2024 expiration of an EU law that authorizes online service providers to voluntarily detect and report the presence of child sexual abuse material in users' communications and files. Supporters of the proposed child safety regime argue that, without replacement rules, harmful content will be ignored.

But online rights groups contend the contemplated legislation would cause its own harm.

"The proposed EU Child Sexual Abuse Regulation is a draft law which is supposed to help tackle the spread of child sexual abuse material," said the European Digital Rights Initiative (EDRi), in response to Johansson's proposal.

"Instead, it will force the providers of all our digital chats, messages and emails to know what we are typing and sharing at all times. It will remove the possibility of anonymity from many legitimate online spaces. And it may also require dangerous software to be downloaded onto every digital device."

Meanwhile the UK is considering its own Online Safety Bill, which also imagines bypassing encryption via device-side scanning. Similar proposals, such as the EARN IT bill, keep surfacing in the US.

The paper by Levy and Robinson – itself a response to a paper opposing device-side scanning that Anderson co-authored with 13 other security experts in 2021 – outlines the various types of harms children may encounter online: consensual peer-to-peer indecent image sharing; viral image sharing; offender to offender indecent image/video sharing; offender to victim grooming; offender to offender communication; offender to offender group communication; and streaming of on-demand contact abuse.

Anderson argues that this taxonomy of harms reflects the interests of criminal investigators rather than the welfare of children. "From the viewpoint of child protection and children’s rights, we need to look at actual harms, and then at the practical priorities for policing and social work interventions that can minimize them," he says.

Anderson calls into question the data used to fuel media outrage and political concern about harms to children. Citing the 102,842 reports sent by the National Center for Missing and Exploited Children (NCMEC), the US-based non-profit that coordinates child abuse reports from tech firms, to the UK's National Crime Agency (NCA), he estimates these led to 750 prosecutions for indecent images, "well under 3 percent of the 2019 total of 27,233 prosecutions for indecent image offences, of which 26,124 involved images of children." And the number of such prosecutions peaked in 2016 and has since fallen, he says.

"In short, the data do not support claims of large-scale growing harm that is initiated online and that is preventable by image scanning," says Anderson.

The danger of relying on dodgy evidence

However, real harm is done by false positives, he observes, pointing to Operation Ore, an internet child abuse crackdown that began two decades ago and led to false accusations.

Levy and Robinson propose having "language models running entirely locally on the client to detect language associated with grooming." They liken this approach to the on-device CSAM-scanning proposed by Apple (and subsequently shelved, at least for the time being) in the US. While they acknowledge the problems raised at the time – false positives, mission creep, vulnerability to tampering – they assert, "Through our research, we’ve found no reason why client side scanning techniques cannot be implemented safely in many of the situations one will encounter."
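
To make the architecture under debate concrete, here is a minimal, purely hypothetical sketch of the client-side scanning pattern: a model runs on the sender's device and scores the plaintext before end-to-end encryption is applied. The function names, threshold, and reporting hook below are illustrative assumptions, not anything drawn from Levy and Robinson's paper or from Apple's design.

```python
# Hypothetical sketch of the client-side scanning pattern under discussion:
# the plaintext is scored by a local model *before* end-to-end encryption,
# and high-scoring messages trigger a report. All names and thresholds here
# are assumed for illustration only.

REPORT_THRESHOLD = 0.9  # assumed operating point; in practice this sets the false positive/negative trade-off


def local_model_score(text: str) -> float:
    """Stand-in for an on-device language model returning an estimated probability of grooming."""
    raise NotImplementedError("placeholder for a locally run classifier")


def send_message(text: str, encrypt, report):
    # Scanning happens on the sender's device, so the encryption itself is
    # never technically "broken" -- but the plaintext is inspected before it
    # is sealed, which is the crux of the objection.
    if local_model_score(text) >= REPORT_THRESHOLD:
        report(text)            # this reporting hook is where false positives
                                # and mission-creep concerns arise
    ciphertext = encrypt(text)  # normal end-to-end encryption then proceeds
    return ciphertext
```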

Anderson says law enforcement agencies long ago gave up scanning emails for keywords like "bomb" because it doesn't work and because traffic analysis, for which content access is not required, is more effective. And he doesn't expect natural language processing (NLP) models to perform any better.

"The use of modern NLP models to detect illegal speech – whether sexual grooming, terrorist recruitment or hate speech – is highly error-prone," he says. "Our research group has long experience of looking for violent online political extremism, as well as fraud and spam. Going by text content alone, it can be difficult to get error rates significantly below 5–10 percent, depending on the nature of the material being searched for."

With a 5 percent false positive rate, Anderson suggests, each of Europe's 1.6 million police officers would have 625 alarms about potential harms to deal with every day – not exactly a practical scenario. That's not to say there aren't options, just that technical fixes that break encryption aren't fit for purpose.
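
Working backwards from those figures gives a sense of the scale: 1.6 million officers each fielding 625 alerts a day implies roughly a billion flagged messages daily, which a 5 percent error rate would produce from around 20 billion scanned messages. The daily message volume is a back-of-envelope assumption here, not a figure from the paper, but the sketch below shows how quickly even a modest false positive rate swamps any plausible review capacity.

```python
# Back-of-envelope sketch of Anderson's false-positive argument.
# Only the officer count, alerts-per-officer figure and 5% error rate come
# from the article; the daily message volume is an assumed illustrative figure.

POLICE_OFFICERS_EU = 1_600_000      # Europe's police officers (from the article)
FALSE_POSITIVE_RATE = 0.05          # 5% error rate cited by Anderson
MESSAGES_SCANNED_PER_DAY = 20e9     # assumption: ~20 billion messages scanned per day

false_alarms_per_day = MESSAGES_SCANNED_PER_DAY * FALSE_POSITIVE_RATE
alarms_per_officer = false_alarms_per_day / POLICE_OFFICERS_EU

print(f"False alarms per day: {false_alarms_per_day:,.0f}")   # ~1,000,000,000
print(f"Alarms per officer per day: {alarms_per_officer:,.0f}")  # ~625
```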

In an email to The Register, Anderson indicated the private sector has shown an interest in helping governments get into content scanning.

"There's a company called Thorn that is lobbying for the scanning contract and would love to get a government mandate for its software to be installed into your chat clients," he said.

"The chat service operators would hate that, which may be a reason why Apple produced its own client-side scanning software that caused a storm last year before part of it was withdrawn. There are also some UK startups that GCHQ and the Home Office funded to produce prototypes. Perhaps this would just be used as a means of bullying Big Tech into doing the job themselves.

"However the big soft spot is how Big Tech handles user reporting, which ranges from bad (Facebook) to almost not at all (Twitter). There is a real case for governments to mandate better performance here, as the paper sets out and as I also discuss in my paper on the UK Online Safety Bill, which came out last week."

Anderson in his Chat Control paper suggests that child safety and privacy campaigners could make common cause to advance rules that compel online service providers to take down illegal content when reported.

"At present, tech firms pay attention to takedown requests from the police and from copyright lawyers, as ignoring them can be expensive – but ignore ordinary users including women and children," he says. "That needs to be fixed, whether by criminal sanctions or by significant financial penalties."

Anderson's recommendations for dealing with child abuse focus on traditional, complicated approaches: quality, community-based policing rather than push-button fixes; social engagement; empowering young people; and respect for human rights.

"The idea that complex social problems are amenable to cheap technical solutions is the siren song of the software salesman and has lured many a gullible government department on to the rocks," Anderson says. "Where ministers buy the idea of a magical software 'solution,' as the industry likes to call its products, the outcomes are often disappointing and sometimes disastrous."

Beyond that, Anderson says that pervasive surveillance, without cause, violates human rights law. "The rule of law must take precedence over 'national security'," he concludes. "We must maintain a moral advantage over competing authoritarian states, not just a military and technological advantage. End-to-end encryption must therefore remain available for moral reasons."

And, he says, encryption must remain for valid cybersecurity reasons, as Levy and Robinson acknowledged previously, and he and his fellow technologists argued in their previous paper. ®
