WhatsApp boss says no to AI filters policing encrypted chat

'What's being proposed is that we ... read everyone's messages. I don't think people want that'

The head of WhatsApp says the company will not compromise the security of its messenger service to satisfy the UK government's efforts to scan private conversations.

Will Cathcart, who has been at parent company Meta for more than 12 years and head of WhatsApp since 2019, told the BBC that the popular communications service wouldn't downgrade or bypass its end-to-end encryption (E2EE) just for British snoops, saying it would be "foolish" to do so and that WhatsApp needs to offer a consistent set of standards around the globe.

"If we had to lower security for the world, to accommodate the requirement in one country, that ... would be very foolish for us to accept, making our product less desirable to 98 percent of our users because of the requirements from 2 percent," Cathcart told the broadcaster. "What's being proposed is that we – either directly or indirectly through software – read everyone's messages. I don't think people want that."

Strong E2EE ensures that only the intended sender and receiver of a message can read it; neither the provider of the communications channel nor anyone eavesdropping on the encrypted chatter can. The UK government is proposing that app builders add an automated AI-powered scanner to the pipeline – ideally in the client app – to detect and report illegal content, in this case child sexual abuse material (CSAM).
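
For a concrete feel for that E2EE property, here is a minimal sketch using an X25519 key agreement via the open source cryptography package. It is illustrative only, and vastly simpler than the Signal protocol WhatsApp actually uses; all names here are our own, not anyone's real implementation.

```python
# pip install cryptography
# Toy illustration of the E2EE property: a server relaying `ciphertext`
# sees only bytes it cannot decrypt.
import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.fernet import Fernet

alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()

def session_key(my_private, their_public):
    # Both parties derive the same symmetric key from the X25519 shared secret
    shared = my_private.exchange(their_public)
    key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"demo-chat").derive(shared)
    return base64.urlsafe_b64encode(key)  # Fernet expects a base64 key

# Alice encrypts with a key only she and Bob can derive
ciphertext = Fernet(session_key(alice, bob.public_key())).encrypt(b"hi Bob")

# Only Bob, holding his private key, can decrypt it
print(Fernet(session_key(bob, alice.public_key())).decrypt(ciphertext))
```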

The upside is that at least messages are encrypted as usual when transmitted: the software on your phone, say, studies the material, and continues on as normal if the data is deemed CSAM-free. One downside is that any false positives mean people's private communications get flagged up and potentially analyzed by law enforcement or a government agent.
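
To make that flow concrete, here is a minimal, hypothetical sketch of where such a scanner would sit in the send path. The hash list, function names, and flagging step are all assumptions for illustration, not a description of WhatsApp's or any government's actual design.

```python
import hashlib

# Hypothetical set of SHA-256 digests of known illegal images, shipped
# to the client by the provider. Purely illustrative.
KNOWN_BAD_HASHES = {
    "0" * 64,  # placeholder digest
}

def client_side_scan(attachment: bytes) -> bool:
    """True if the attachment exactly matches a known-bad digest."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_HASHES

def send(attachment: bytes) -> None:
    if client_side_scan(attachment):
        # Under the proposal, a match is flagged or reported rather than sent
        print("flagged: matched a known-bad hash")
        return
    # Otherwise the message is end-to-end encrypted and transmitted as usual;
    # the encryption step is elided, this sketch only shows the ordering
    print("clean: encrypting and sending as normal")

send(b"holiday photo bytes")
```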

Another downside is that the definition of what gets filtered may gradually broaden over time, and before you know it, everyone's conversations are being automatically screened for things politicians have decided are verboten. A third downside is that client-side AI models that don't produce a lot of false positives are likely to be easily defeated, and are mainly good for catching well-known, unaltered examples of CSAM.
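
That last point is easy to demonstrate under the same exact-hash assumption used in the sketch above: change a single byte of a file and the digest changes completely, so only bit-for-bit copies of known material are caught.

```python
import hashlib

original = b"known image bytes"
altered = b"Known image bytes"  # a single byte changed

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
# The two digests bear no resemblance, so an exact-match scanner misses
# any re-encoded, cropped, or otherwise altered copy; fuzzier perceptual
# matching catches more variants but drives up the false-positive rate.
```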

Messenger services such as WhatsApp have been at the center of a years-long debate over encryption and public safety. At issue is whether law enforcement agencies should be allowed to dip into the encrypted communications of billions of people to hunt down the Four Horsemen of the Infocalypse: terrorists, drug dealers, purveyors of CSAM, and organized crime, plus whoever else comes along.

UK officials have been pushing for such filtering powers, with Ian Levy, technical director of the UK's National Cyber Security Centre, and Crispin Robinson, technical director for cryptanalysis at British spy agency GCHQ, recently writing a research paper arguing that automated scanning can be deployed in ways that protect individual privacy.

"We have not identified any techniques that are likely to provide as accurate detection of child sexual abuse material as scanning of content, and whilst the privacy considerations that this type of technology raises must not be disregarded, we have presented arguments that suggest that it should be possible to deploy in configurations that mitigate many of the more serious privacy concerns," Levy and Robinson wrote. Note the "should be" and "many" caveats.

The European Union in May also proposed legislation that puts much of the responsibility for ferreting out and exposing such material on providers.

Some children's rights groups argue vehemently that the need to protect children from exploitation should not be trumped by arguments for absolute privacy in communications. Andy Burrows, head of child safety online policy at the National Society for the Prevention of Cruelty to Children (NSPCC) in the UK, told the Beeb that direct messaging is the "front line" of child sexual abuse. The charity has also called for AI to be deployed against CSAM by probing private content on people's devices.

He pushed back on Cathcart's claim that WhatsApp has detected hundreds of thousands of child sex-abuse images through its own techniques – "more than almost any other internet service in the world" – saying that the service identifies only a fraction of the abuse that others, including Facebook and Instagram (both Meta businesses), do.

In an online column last week, the Child Rights International Network tried to set the terms of the debate, noting that encryption technologies can both protect and harm children, touching on the rights to privacy and to protection from abuse.

"Given the complex interplay between encryption and children's rights, it is not surprise that a debate is currently raging on encryption and public safety, in particular regarding the fight against online child sexual abuse," the organization wrote.

Communications service providers and mobile device vendors, however, are reluctant to weaken encryption. Apple last year announced plans to scan photos on users' iPhones for abusive content before they were uploaded to iCloud, but put those plans on hold after complaints from privacy groups and users that doing so compromised security.

"If Apple can't get it right, how can the government?" Monica Horton, policy manager for the Open Rights Group, was quoted as saying. "Client-side scanning is a form of mass surveillance. It is a deep interference with privacy."

Regarding the proposed EU legislation, Cathcart's answer was the same: "What's being proposed is that we – either directly or indirectly through software – read everyone's messages. I don't think people want that." ®
