These boffins' deepfake AI vids are next-gen. But don't take our word for it. Why not ask Zuck or Kim Kardashian...
'Text editing' system for speeches to change meanings emerges along with CEO-goading art attack
Video Once again, artificially intelligent software has been demonstrated automatically editing videos of talking heads to make them say things they never actually uttered. And it's getting better at it. Today, it's altering footage of boffins, Mark Zuckerberg, and Kim Kardashian, but next it could be you. Probably not.
But maybe.
Deepfakes, content doctored by deep-learning algorithms to seemingly change reality, are all the rage at the moment. Open-source code and massive amounts of data scraped from the internet, whether it’s clips of adult-movie actresses or the voice of popular podcaster Joe Rogan, have made it easier to craft deepfakes.
Check out these ones below, made by a pair of artists going by the names Bill Posters and Daniel Howe, who collaborated with CannyAI, a tech company based in Israel. They produced fake videos of US President Donald Trump, Facebook CEO Mark Zuckerberg, and celebrity socialite Kim Kardashian saying stuff they never said, as part of a preview of Spectre, an art installation in the UK, and posted them on Instagram.
Here’s a video of what appears to be Zuckerberg talking about controlling billions of people’s stolen data. It's not perfect, but you get an idea of where this technology is heading. Facebook-owned Instagram declined to remove the video, by the way, as doing so would be rather hypocritical: Facebook refused to take down maliciously altered videos of US politician Nancy Pelosi, after all. Posters and Howe have Zuck over a barrel, here.
[Instagram video embed]
It’s not bad, though the voice is, to our ear, dubbed in from an actor: the machine-learning part is matching the footage of the chief exec to the impersonator, it seems. The Kim Kardashian example is better, and her eyeroll and subtle movement of her hands are spot on.
[Instagram video embed]
Details of the technology used by CannyAI aren't public, so take the AI part with a pinch of salt. If it truly is machine-learning based, it perhaps works in a similar way to a method revealed this month in a paper by eggheads at Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe.
Text-based editing of talking heads
To use this particular AI system, all you have to do is obtain a video clip and a transcript of someone talking, edit that transcript, run it all through the code, and lip-sync the result with edited audio to produce a video of the person saying the doctored script. You can use it to subtly alter interviews – removing single words to reverse the meaning of sentences, or changing one or two words at a time – and invent a new reality.
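To make that transcript-editing step concrete, here's a minimal sketch in Python – our own illustration, not the researchers' code – of how an edited script can be diffed against the original to find the words the video stage must add, remove, or swap. The sentences are invented.

```python
# Toy diff of an original transcript against an edited one, to find the
# word-level changes the video-synthesis stage would have to realize.
import difflib

original = "I did not sign the contract on Tuesday".split()
edited = "I did sign the contract on Friday".split()

matcher = difflib.SequenceMatcher(a=original, b=edited)
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    if op != "equal":
        print(f"{op}: {original[i1:i2]} -> {edited[j1:j2]}")

# Output:
# delete: ['not'] -> []
# replace: ['Tuesday'] -> ['Friday']
```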
“We presented the first approach that enables text-based editing of talking-head video by modifying the corresponding transcript,” the paper stated. "As demonstrated, our approach enables a large variety of edits, such as addition, removal, and alteration of words, as well as convincing language translation and full sentence synthesis."
Here’s how it works:
It requires a clear video of a talking head, and a transcript of what is being said in the original video. The team's machine-learning model, a recurrent neural network, carefully analyzes the audio and video to link the person's mouth movements to their speech.
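One way to picture that analysis step: build an index from each speech sound, or phoneme, to the video frames in which the speaker's mouth forms it. The toy sketch below, with invented frame timings, shows the idea; the actual system learns this mapping rather than reading it from a hand-made table.

```python
# Hedged illustration: index phonemes to the frame ranges where the speaker
# pronounces them, as a forced aligner might report. Timings are invented.
from collections import defaultdict

alignment = [  # (phoneme, start_frame, end_frame)
    ("AY", 0, 4), ("D", 5, 7), ("IH", 8, 10), ("D", 11, 13), ("N", 14, 16),
]

phoneme_index = defaultdict(list)
for phoneme, start, end in alignment:
    phoneme_index[phoneme].append((start, end))

print(phoneme_index["D"])  # frame ranges showing a 'D' mouth shape
# [(5, 7), (11, 13)]
```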
Next, the model takes an edited version of the script, and searches for the person's mouth movements that match the required sounds, in order to get the talking head to visually pronounce the new words. The selected lip movements are blended into the source video at the correct moments to produce footage that appears to show the face saying words not previously spoken. Now the audio needs to be edited: this can be done by cutting words from the original recording as required, or getting an actor to impersonate the target, or using a voice synthesizer to generate a new audio track. When the new audio and doctored video are synchronized, hey presto, you’ve got yourself a deepfake.
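And here's a deliberately naive sketch of the search-and-stitch idea: walk the edited script's phonemes, pull a matching frame range from an index like the one above, and emit an edit list for the renderer. It takes the first match and does no blending, so it only shows the shape of the algorithm; the function name stitch is ours.

```python
# Naive search-and-stitch sketch: for each phoneme in the edited script, pick
# a frame range where the speaker's mouth already makes that shape. The real
# system optimizes the choice and blends frames with a neural renderer.
phoneme_index = {  # toy index, as built in the previous sketch
    "AY": [(0, 4)], "D": [(5, 7), (11, 13)], "IH": [(8, 10)],
}

def stitch(new_phonemes, index):
    edit_list = []
    for ph in new_phonemes:
        ranges = index.get(ph)
        if not ranges:
            raise ValueError(f"no source footage for phoneme {ph!r}")
        edit_list.append(ranges[0])  # first match; the paper optimizes this
    return edit_list

print(stitch(["D", "IH", "D"], phoneme_index))
# [(5, 7), (8, 10), (5, 7)]
```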
For best results, the system needs about an hour of video of a specific person talking, and the neural network has to be retrained for each new person.
Generating a synthetic composite mask and adjusting to fit the talking head's face ... Image credit: Fried et al.
AI technology isn’t strictly needed to make these sorts of deepfakes: someone with sharp video-editing skills and software can pull off the same caper, given enough time. However, this machine-learning approach aims to be fast and automatic, so anyone can use it whenever they need it. And eventually, with improvements, its output may be harder to detect as fake, thanks to the smooth blending and subtle tweaks, compared to a fake produced by hand using something like Final Cut Pro.
When the researchers asked 138 people to determine whether a collection of videos were doctored or not, the edited videos were rated as real 59.6 per cent of the time, on average (see page 12 of the paper). So, yeah, they’re not convincing enough right now to dupe everyone, though they're good enough to fool most people.
And as the technology continues to improve, the threat of deepfakes spreading believable false information, from made-up interviews and confessions to outright lies, increases.
The boffins discussed the ethical quandary. “We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals. We are concerned about such deception and misuse,” they wrote in their paper.
Although they haven’t provided any concrete solutions to counter deepfakes, they hope that releasing the details of their research will help others develop new “fingerprinting and verification techniques,” such as digital watermarks and signatures, to identify faked or doctored footage.
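For a flavour of what such a verification scheme might look like, here's a minimal sketch of footage signing: a publisher computes a keyed signature over the original video bytes, and anyone holding the key can later check whether a clip was altered. This is our illustration of the general idea, not anything from the paper; a real scheme would use asymmetric keys and fingerprints robust to benign re-encoding.

```python
# Toy footage-signing sketch: sign the original bytes, verify later copies.
# SECRET_KEY is a stand-in; real systems would use public-key signatures.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"

def sign_footage(video_bytes: bytes) -> str:
    return hmac.new(SECRET_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_footage(video_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_footage(video_bytes), signature)

clip = b"\x00\x01 original video payload"
tag = sign_footage(clip)
print(verify_footage(clip, tag))             # True: footage untouched
print(verify_footage(clip + b"edit", tag))   # False: footage doctored
```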
“We hope that publication of the technical details of such systems can spread awareness and knowledge regarding their inner workings, sparking and enabling associated research into the aforementioned forgery detection, watermarking and verification systems. Finally, we believe that a robust public conversation is necessary to create a set of appropriate regulations and laws that would balance the risks of misuse of these tools against the importance of creative, consensual use cases,” they concluded. ®