Privacy pathology: It's time for the users to gather a little data – evidence
If Sherlock were alive today, he'd pack a Pi next to pistol and pipe
Opinion Almost exactly a month ago, we noted a splendid piece of academic research into Google's data-gathering and consent practices.
According to the research paper by Trinity College Dublin computer science professor Douglas Leith, the company had been gathering far too much user data from core messaging apps, and the forensic analysis of the network flow extracted a well-deserved mea culpa from Mountain View.
Now, it's Amazon's turn to find itself being examined for privacy breaches. Once again, the well-honed tools of the scientific method were used by researchers to unmask a spy in our lives, in this case the Alexa voice interaction system, Skills (what Alexa calls apps), and the bidding system that sells keyword-driven advertisement slots to advertisers.
Talk to Alexa about something, the academics found, and the auction price for related advertising opportunities goes up.
The complexity of this system is matched by the lack of transparency over how it works, much as the charming simplicity of using Alexa – you just gab at it – hides how it works. If you fancy annoying the device, try asking it what version of its software it's running, what diagnostics it has available, whether there's a debug mode, and so on. No matter what you may know about fault-finding or configuring computers or mobile devices, you won't get anywhere.
And so, the researchers had to design a complicated melange of hardware and software, including the obligatory Raspberry Pi, to load test data into the system and monitor what happened as a result. Read the paper for the details [PDF]. They're uncommonly good: as the researchers say, many reports of this kind lack enough technical detail to encourage others to replicate or build on the work.
Time for a little forensic computer science
By now, there's little argument that a new, legitimate and pressing field of forensic computer science is evolving, that of discovering and characterizing abuses of personal privacy and consent.
We know that the beasts who track us and feast on our data can only be considered enemies; they promise symbiosis and law-abiding honesty, while blocking any attempt to verify this.
The regulators are overwhelmed and underfunded, the politicians glacially slow and hard of hearing. The cloud is obdurately opaque; even proprietary software is amenable to decompilation and analysis if you can get at the binaries, but what goes on beyond the API is a true trade secret.
Science is our only hope.
Here, at last, is something the big tech firms can't hide. We know that they're collecting all that data for a purpose, and that for it to be worthwhile that purpose must be manifest.
With the Alexa finding, that purpose was skewing the advertising market: the price signal the researchers found was the proof of non-compliance. This is the same realization that dogs all the intelligence agencies of the world: if you use the information you've found, sharp eyes will notice.
So, as with any novel process or phenomenon, the scientist selects the tests and watches the results. Whether the process wants to be understood or not isn't part of the equation. Just as chemists, physicists and biologists share techniques and data sets as much as they do individual findings, a priority of this new science of data privacy must be to recognise itself as a field, start to curate its knowledge, and aim for a collective process.
The extremely smart but largely inaccessible tools and techniques demonstrated by both the Google and Amazon researchers of late should become as standard as lab equipment. It'll take a while, and funding for those who try to characterize the misdeeds of the very rich is often peculiarly hard to come by, no matter how pressing the need.
Even so, as in the early days of any new science, there is room for the amateur to do good work. Some experiments need nothing more than you, your devices, a pinch of methodology and a basic grasp of statistics. You don't even need a Raspberry Pi.
Pick five things at random that you'd never buy or find of the slightest interest, like wheelbarrows, golf clubs, collectable pig-themed ceramics, Windows 11. Discuss them in five different ways online – or even just in earshot of smart devices.
Before, during and after, meticulously note the number of ads or unrequested content you see, and the percentage, if any, of those which touch on the test data.
Do this with application over a decent period, much as a Victorian parson would map the ecology of his local meadow over the seasons, and the patterns will emerge. They can't help it. It's what they do.
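The tally described above needs only a pinch of arithmetic to turn into evidence. As a minimal sketch, with wholly hypothetical numbers standing in for your own notebook entries, a two-proportion z-test can say whether the on-topic ad rate after your test chatter is significantly higher than the rate before it:

```python
# Hedged sketch of the before/after ad tally the article suggests.
# All counts below are hypothetical examples, not measured data.
from math import sqrt

def topic_ad_rate(topic_hits: int, total_ads: int) -> float:
    """Fraction of observed ads that touched one of the test topics."""
    return topic_hits / total_ads if total_ads else 0.0

def two_proportion_z(hits_before: int, n_before: int,
                     hits_after: int, n_after: int) -> float:
    """Two-proportion z-score: how far apart are the two ad rates?

    Values above ~1.96 mean the 'after' rate is unlikely to be
    chance at the usual 5 per cent significance level.
    """
    p_before = hits_before / n_before
    p_after = hits_after / n_after
    p_pool = (hits_before + hits_after) / (n_before + n_after)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_before + 1 / n_after))
    return (p_after - p_before) / se

# Hypothetical tallies: 200 ads seen before the experiment, 4 on-topic;
# 180 ads seen after discussing wheelbarrows et al., 27 on-topic.
z = two_proportion_z(4, 200, 27, 180)
print(f"before: {topic_ad_rate(4, 200):.1%}, "
      f"after: {topic_ad_rate(27, 180):.1%}, z = {z:.2f}")
```

With those made-up figures the jump from 2 per cent to 15 per cent comes out far beyond chance; with your real counts the test will tell you honestly whether you have a pattern or a coincidence.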
The internet itself is an admirable substitute for the Victorian postal system when it comes to correspondence with like minds and the dissemination of transactions. Many secrets will be revealed.
There is no Nobel prize for any sort of computing, let alone the new science of data privacy. There is the reward of using the miscreants' strengths as vulnerabilities against themselves, of substituting the conscience that they lack with the knowledge that they're being watched, and of making the internet a less wretched, more honest place.
Fair exchange for spending five minutes talking about ceramic pigs. ®