Black Hat
Deepfakes, the AI-generated talking heads that can say whatever their creator wants them to, are getting harder to detect. But boffins have enlisted an unlikely ally in the quest for truth – mice.
In a presentation at the Black Hat security conference in Las Vegas, data scientists examined various ways to identify deepfake videos – something that is going to become increasingly important as US elections approach in 2020.
George Williams, director of data science at GSI, explained that AIs are better at spotting deepfakes than fleshbags. Earlier this year, humans were pitted against a generative adversarial network (GAN) to call out a selection of deepfakes, and the carbon-based humanoids did pretty well, spotting 88 per cent of fakes. But the machines managed an average rate of 92 per cent.
"That seems pretty good, but when you consider the sheer volume of content that can be put out on social media, you're going to see a lot of mistakes and false positives," he said. "Some of the content will get past both humans and machines."
One solution is to build bigger and better AI systems, said Alexander Comerford, a data science software engineer at Bloomberg LP. And the flood of deepfakes might not be so bad – at first.
"Doing deepfakes is really hard – the infamous Obama one took 17 hours of presidential address footage to create," Comerford said. "If you're not a public figure, 17 hours is a lot of data to find. It also took two weeks to train on a CPU, or two hours on a GPU, and we don't know how long the final touches on the teeth and other features took."
More advanced GANs could be built to tackle deepfakes, but they still struggle with the pitch and tone of human voices. The frequency of voices might be one route forward – the letters b and v sound similar to the ear but sit in completely different frequency bands.
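The frequency idea can be illustrated with a toy sketch: take two sounds, run a Fourier transform over each, and compare where the energy peaks. The code below is a minimal illustration using NumPy and synthetic tones as stand-ins for phonemes – the 300 Hz and 4 kHz figures are illustrative assumptions, not measurements from the talk.

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the strongest component in a mono signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

sample_rate = 16000  # Hz, a common rate for speech audio
t = np.linspace(0, 1, sample_rate, endpoint=False)

# Synthetic stand-ins for two phonemes with energy in different bands
low_band = np.sin(2 * np.pi * 300 * t)     # voiced energy, roughly b-like
high_band = np.sin(2 * np.pi * 4000 * t)   # frication noise, roughly v-like

print(dominant_frequency(low_band, sample_rate))   # ~300.0
print(dominant_frequency(high_band, sample_rate))  # ~4000.0
```

A real detector would compare full spectrograms rather than single peaks, but the principle is the same: two phonemes that fool the ear can still be separated cleanly in the frequency domain.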
But there are other biological systems that could do better. Jonathan Saunders, a graduate student at the University of Oregon, explained that mice are actually pretty good at spotting human voices and can be trained to do so.
The team trained mice to recognise human speech sounds and achieved a 75 per cent detection rate for simple speech, dropping to 65 per cent for complex vocabulary. By monitoring the neural patterns in a mouse's head, the team reckons the approach could become an important tool for training AI systems to get better at spotting fake video.
"We think it's time to use auditory systems, so we should train mice to detect fake and real speech," he said. "People are good at spotting fakes but it's going to be a cat-and-mouse game." [Cue muffled groans from the audience.]
You can read the full research here [PDF]. ®