Net privacy wars will be with us always. Let's set some rules
Size matters, and what you do with it. But keep it safe
Opinion Quick question number 1. Do you trust Google? The Movement for an Open Web (MOW) doesn't. It's taking Big G to the UK's Big C – the Competition and Markets Authority – over the forthcoming Chrome IP Protection feature.
Quick question number 2. Do you trust European governments? The Electronic Frontier Foundation (EFF) and hundreds of experts don't, pointing out that elements of proposed revisions to EU regulations called eIDAS would exempt state-approved certificates from security action by browsers.
Let's take each story, both from the past fortnight, in turn. Google's IP Protection is basically an anonymizing proxy: Chrome routes your traffic through a third-party relay, which assigns random IPs that change often enough that nobody can use them to identify you across sites. This is bad, says MOW, because it means only Google can do the tracking, which is unfair on other ad tech companies – for whom MOW speaks. It also encourages fraud and, oh yes, won't someone think of the children?
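The anonymizing idea can be sketched in a few lines of Python. This is a toy model, not Google's actual architecture – the class, pool, and IP addresses are invented for illustration:

```python
import random

# Toy model (not Google's real design): a proxy pool hides the
# client's real IP behind randomly chosen exit addresses, so no
# single site-visible IP links visits back to one user.
class AnonymizingProxy:
    def __init__(self, pool, seed=None):
        self.pool = pool                  # invented exit IPs
        self.rng = random.Random(seed)

    def fetch(self, client_ip, site):
        # The site only ever sees a randomly chosen exit IP,
        # never client_ip, so cross-site linking by IP fails.
        exit_ip = self.rng.choice(self.pool)
        return {"site": site, "seen_ip": exit_ip}

proxy = AnonymizingProxy(["203.0.113.1", "203.0.113.2", "203.0.113.3"])
a = proxy.fetch("198.51.100.7", "news.example")
b = proxy.fetch("198.51.100.7", "shop.example")
print(a["seen_ip"], b["seen_ip"])  # neither is the real client IP
```

The point of contention isn't the mechanism – it's who runs the relay.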
The eIDAS regulation is about trust. The digital certificates that control the security of protocols like HTTPS are issued by Certificate Authorities (CAs) which are part of a chain of trust. A site with a valid certificate is who it says it is. If a CA is compromised or malevolent, it gets removed from that chain and browsers no longer use keys provided by sites with the bad certs.
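That removal step is the whole safety mechanism, and a toy version makes it concrete. This is a deliberately simplified sketch, nothing like real X.509 path validation – the certificate names are invented:

```python
# Toy chain-of-trust check (illustrative only, not real X.509).
# A chain is a list of (subject, issuer) pairs from leaf to root;
# it's valid only if each issuer link holds and the root is one
# the browser still trusts.

def chain_is_valid(chain, trusted_roots):
    for (subject, issuer), nxt in zip(chain, chain[1:]):
        if issuer != nxt[0]:              # broken issuer link
            return False
    return chain[-1][0] in trusted_roots

site_chain = [("example.com", "Intermediate CA"),
              ("Intermediate CA", "Root CA"),
              ("Root CA", "Root CA")]     # self-signed root

roots = {"Root CA"}
print(chain_is_valid(site_chain, roots))  # True: anchored in a trusted root

roots.discard("Root CA")                  # CA compromised: browsers drop it
print(chain_is_valid(site_chain, roots))  # False: its certs no longer accepted
```

Everything below the removed root fails at once – which is exactly the response eIDAS would forbid.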
eIDAS wants this safety feature turned off for certificates issued by state-approved CAs. Even if the certificates falsely identify fake sites, users won't be able to tell. This would give states, state-approved organisations, or anyone corruptly part of that particular chain of trust, the ability to make fake sites that monitor and decrypt Web traffic silently and at scale. This is another bite of the end-to-end encryption cherry, and for the same reasons – helping fight crime and terrorism, prevent abuse, and, oh yes, think of the children.
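Why an un-removable CA is dangerous can be shown with another hypothetical sketch – the CA names and the "mandated" set are invented to model the proposal, not drawn from the regulation's text:

```python
# Toy model of a trust store with a legally mandated CA.
# Browsers may edit `trusted`, but entries in `mandated`
# always pass validation, however the CA behaves.

def validate(leaf_issuer, trusted, mandated):
    return leaf_issuer in trusted or leaf_issuer in mandated

trusted = {"Honest Root"}
mandated = {"State CA"}                   # cannot legally be removed

fake_site_issuer = "State CA"             # cert for a fake site
print(validate(fake_site_issuer, trusted, mandated))  # True: "valid" padlock shown

trusted.discard("State CA")               # removal attempt has no effect
print(validate(fake_site_issuer, trusted, mandated))  # still True
```

The browser's only defence against a rogue CA is removal; take that away and the padlock stops meaning anything.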
These two stories have similarities. Both propose modifications to basic internet functionality in the name of security. Both are being opposed on the grounds that they do the opposite. Who to believe, and how to decide whether to support or decry them?
One way is to look at the combatants. Google is deeply untrustworthy on many levels, with a long history of being caught out doing bad things with data. Nation states are all over the place, but even those with a strong commitment to regulation and the rule of law go as far as they can to grab data. State agencies, even the good ones, regularly do illegal things behind the shield of state security, and are as prone to incompetence and corruption as any human endeavor. MOW is a dark horse; it used to call itself Marketers for an Open Web and has a history of lobbying against Google's anti-tracking moves. Apply whatever rules you use for trusting opaque lobbying groups. The EFF is a fully open group of people with a long record of identifying and warning about harmful attempts to damage user freedoms on the internet. Again, apply the trust you feel fitting.
Yet trust, especially publicly expressed, is by itself a poor filter on which to make serious decisions. It can be swayed by random experiences, your social group, and where your paycheck comes from. We need a deeper analysis.
All privacy protects the good and the bad alike; the technical details don't matter. Authoritarian regimes demand total privacy for themselves and none for their people, while liberal democracies define and protect personal privacy against intrusion, including by the state. We have exceptions – search warrants, wiretaps, ISP log disclosures – but within a long-evolved system of judicial oversight. We can assume that every privacy component in IT will change the balance of power between players, and every such component will at some point be attacked like an oyster by a hungry walrus.
Any such attacks can be tested against four factors – how big a change does it make, who is harmed and who benefits, how likely is it to go wrong, and what are the consequences when it does?
Looking at IP Protection, the size of the change it brings is the only factor in contention, and that change is small: there are plenty of ways to anonymize your IP already, and Google strongly denies it will be able to track where others cannot, by dint of the basic architecture. It harms IP-based tracking, and thus increases user privacy. It has no obvious harmful mode of failure unique to itself.
The eIDAS regulation makes an enormous change by mandating man-in-the-middle attack technology that it would be illegal for browser makers to defend against. It weakens the security on which the web is built in a unique way for unsophisticated users, while giving a wide range of entities the tools to decrypt data of all kinds. It is as likely to go wrong as any state-run secret security system, through incompetence, accident or malevolence, with consequences that could affect not just the half-billion EU citizens but all those who use EU-based services. Apart from any criminals with enough nous to get around the interception technology, of course.
It is the special nature of IT that it can apply to everyone all at once, in a way that previous state-sanctioned intrusions into our privacy could not. It is a basic principle of law that the more harm a thing can do, the more heavily it is regulated, and it is impossible to look at the eIDAS proposal without first asking what oversight and safeguards would be appropriate. IP Protection? Not so much.
This is all by way of proposing very basic risk assessment principles, which is, SpaceX's Starship concrete tornado notwithstanding, hardly rocket science. It is, or should be, the meat and drink of regulators and lawmakers. Especially with the latter, though, it is missing entirely from the most dangerous proposals, and that can't be accidental, not every time. Perhaps we shouldn't ask who we should trust, but who it is that doesn't trust us – and why. ®