Fight AI with AI! Code taught to finger naughty deepfake vids made by machine-learning algos

It works for now because the forgeries are quite easy to spot


The rise of AI systems that can generate fake images and videos has spurred researchers in the US to develop a technique to sniff out these cyber-shams, also known as deepfakes.

Generative Adversarial Networks (GANs) are commonly used for creative purposes. These neural networks have helped researchers create made-up data to train artificially intelligent software when there is a lack of training material, and have also assisted artists in creating portraits.

However, like anything tech-related, there is also a sinister side. The technology has been abused by miscreants to paste the faces of actresses, ex-girlfriends, politicians, and other victims, onto the bodies of porn stars. The result is fairly realistic, computer-generated video of people seemingly performing X-rated acts. The fear is this will go beyond fake smut, and into the realms of forged interviews and confessions, especially when combined with faked AI-generated audio.

Now, PhD student Yuezun Li and Siwei Lyu, an associate professor of computer science at the University at Albany, State University of New York, have come up with a technique that attempts to identify deepfake videos, such as those crafted by the open-source DeepFake FaceSwap algorithm.

Deepfakes are, for now, not hard for humans to spot. The doctored videos are uncanny, the facial expressions aren’t very natural, and any motion is pretty laggy and glitchy. They also have a lower resolution than the source material. Thus, people should be able to realize they are being hoodwinked after more than a few seconds. However, as the technology improves, it would be nice if machines could be taught the tell-tale signs of these forgeries so as to alert unaware folks in future.

Detecting deepfakes

Previous attempts to make computers do the job looked at things like the way people blink in videos for signs of any shenanigans. This often required generating deepfakes with GANs first to train other neural network systems used in the detection process.

Li and Lyu’s method, however, doesn’t rely on GANs, and is therefore less time-consuming and less computationally intensive. First, they used traditional computer-vision techniques to detect faces in 24,442 training images and extract their facial landmarks.
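
The article doesn't name the exact tools used for this step, but dlib's stock frontal-face detector and its standard 68-point landmark model are common stand-ins for those traditional computer-vision techniques. A minimal sketch, assuming that pairing:

```python
# Face-and-landmark extraction sketch using dlib. The model file is dlib's
# standard 68-point predictor, not something named in the article.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_landmarks(image_path):
    """Return (face_rect, 68x2 landmark array) for the first detected face, or None."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)  # upsample once to catch smaller faces
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    points = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])
    return faces[0], points
```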

Next, they warped and twisted the facial features in the images to mimic the eerie effects often seen in deepfake vids. Finally, they trained convolutional neural networks (CNNs) on the real and disfigured images to develop classifiers that estimate the probability of a scene being genuine or not. After training, screenshots from videos were fed into these networks, which indicated whether the faces in the images were likely real or manipulated.
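
The negative examples don't need a GAN at all: a real face crop can be shrunk, smoothed, and scaled back up so it picks up the same resampling smears a pasted-in deepfake face would have. A hedged sketch of one such degradation, with the scale factor and blur kernel chosen purely for illustration rather than taken from the paper:

```python
# Simulate the low-res-then-warped look of a deepfake face on a real crop.
# scale and blur_ksize are illustrative guesses, not the paper's settings.
import cv2

def simulate_warp_artifacts(face, scale=0.25, blur_ksize=5):
    """Shrink, smooth, and re-enlarge a face crop to mimic deepfake warping."""
    h, w = face.shape[:2]
    small = cv2.resize(face, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_AREA)              # fake low-res synthesis
    small = cv2.GaussianBlur(small, (blur_ksize, blur_ksize), 0)  # smooth fine detail
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)  # warp back up

# Real crops are labelled 0 and degraded crops 1, giving the CNNs their
# genuine-versus-manipulated training pairs without running a GAN.
```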

“Our method is based on the observations that [the] current DeepFake algorithm can only generate images of limited resolutions, which need to be further warped to match the original faces in the source video,” they explained in a paper emitted this month.

"Such transforms leave distinctive artifacts in the resulting DeepFake videos, and we show that they can be effectively captured by convolutional neural networks."

The duo applied the aforementioned technique to four different CNNs. The evaluation set contained 49 real videos and 49 DeepFake-generated videos. Each vid featured a single subject, and lasted for about 11 seconds. There were 32,752 frames in total.
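
The article doesn't include training code, but adapting one of these predesigned networks into a two-class real-versus-fake classifier takes only a few lines of PyTorch. In this sketch the optimizer, learning rate, and data pipeline are placeholders, not the researchers' settings:

```python
# Fine-tune an off-the-shelf ResNet50 as a binary real-vs-fake classifier.
# Hyperparameters below are placeholders, not taken from the paper.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)  # two outputs: real, manipulated

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, labels):
    """One optimization step over a batch of face crops and 0/1 labels."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```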

VGG16, an older CNN developed by researchers at the University of Oxford in the UK, performed the worst at detecting deepfake images (with 83.3 per cent accuracy), compared to ResNet50 (97.4 per cent), a more popular CNN built by Microsoft researchers.

Microsoft's other variants, ResNet101 and ResNet152, came second (95.4 per cent) and third (93.8 per cent), respectively. For deepfake videos as a whole, ResNet101 was best (99.1 per cent), followed by ResNet50 (98.7 per cent) and ResNet152 (97.8 per cent), with VGG16 last (84.5 per cent).
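
How the frame scores roll up into that per-video verdict isn't spelled out in the article. One simple assumed rule is to average the classifier's fake-probability over sampled frames, along these lines:

```python
# Assumed per-video aggregation: mean fake-probability over every Nth frame.
# For brevity this scores whole frames; the method proper would first crop
# the detected face, as in the landmark sketch above.
import cv2
import torch
import torch.nn.functional as F

def video_fake_score(model, video_path, transform, every_n=10):
    """Average probability-of-fake across sampled frames; model must be in eval mode.

    `transform` is an assumed preprocessing callable turning an RGB array
    into a CxHxW tensor sized for the network.
    """
    cap = cv2.VideoCapture(video_path)
    probs, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = transform(rgb).unsqueeze(0)  # add batch dimension
            with torch.no_grad():
                probs.append(F.softmax(model(batch), dim=1)[0, 1].item())
        i += 1
    cap.release()
    return sum(probs) / len(probs) if probs else None
```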

Although the approach is promising, the researchers have yet to report meaningful results on deepfake videos and images beyond their carefully curated DeepFake dataset. More testing is needed on real-world forgeries, in other words. Plus, as the quality of GANs and fake content improves, it’ll become harder to detect forgeries using this method, we reckon.

“As the technology behind DeepFake keeps evolving, we will [continue to] improve the detection method," the academics noted. "First, we would like to evaluate and improve the robustness of our detection method with regards to multiple video compression.

"Second, we [are] currently using predesigned network structure[s] for this task (e.g, resnet or VGG), but for more efficient detection, we would like to explore dedicated network structure[s] for the detection of DeepFake videos." ®
