Real-time deepfakes can be beaten by a sideways glance

For now at least, until data catches up

Real-time deepfake videos, heralded as the bringers of a new age of internet uncertainty, appear to have a fundamental flaw: they can't handle side profiles.

That's a conclusion from Metaphysic, a company that specializes in 3D avatars, deepfake technology, and rendering 3D images from 2D photographs. In tests it conducted using the popular real-time deepfake app DeepFaceLive, a hard turn to the side made it readily apparent that the person on screen wasn't who they appeared to be.

Multiple models were used in the test – several from deepfake communities and models included in DeepFaceLive – but a 90-degree view of the face caused flickering and distortion as the Facial Alignment Network used to estimate poses struggled to figure out what it was seeing.


A pair of images from Metaphysic's tests showing a deepfaked Jim Carrey, and the result of turning to the side.

"Most 2D-based facial alignment algorithms assign only 50-60 percent of the number of landmarks from a front-on face view to a profile view," said contributor Martin Anderson in a blog post.

Without being able to see enough reference points, the software simply doesn't know how to project its fake face.
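The geometry behind this is easy to demonstrate. The sketch below is purely illustrative (it is not Metaphysic's or FAN's actual code, and the landmark coordinates are made up): it rotates a handful of hypothetical 3D facial landmark points about the vertical axis and counts how many still face the camera, showing how a hard profile turn self-occludes reference points.

```python
import math

# Hypothetical 3D landmark points (x, y, z): x spans left-to-right across
# the face, z points toward the camera. Real detectors such as FAN track
# 68 points; a handful is enough to show the self-occlusion effect.
LANDMARKS = [
    (-0.5, 0.4, 0.2),   # left eye
    (0.5, 0.4, 0.2),    # right eye
    (0.0, 0.0, 0.6),    # nose tip
    (-0.3, -0.4, 0.3),  # left mouth corner
    (0.3, -0.4, 0.3),   # right mouth corner
    (-0.9, 0.0, 0.1),   # left jawline
    (0.9, 0.0, 0.1),    # right jawline
]

def visible_after_yaw(points, yaw_degrees, threshold=0.05):
    """Rotate points about the vertical (y) axis and count those still
    oriented toward the camera, using each point's position as a crude
    stand-in for its surface normal."""
    yaw = math.radians(yaw_degrees)
    visible = 0
    for x, _y, z in points:
        # y-axis rotation; the camera looks along +z
        z_rotated = -x * math.sin(yaw) + z * math.cos(yaw)
        if z_rotated > threshold:
            visible += 1
    return visible

print(visible_after_yaw(LANDMARKS, 0))   # front-on: all 7 points visible
print(visible_after_yaw(LANDMARKS, 90))  # hard profile: only the near side remains
```

At 90 degrees, everything on the far side of the head rotates away from the camera, and the software is left with too few anchors to align a convincing fake.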

Derailing deepfakes

In a matter of just a few years, deepfakes have advanced from being able to superimpose faces onto images, to doing the same in pre-recorded video. The latest advances allow real-time face swapping, which has resulted in more deepfakes being used in online fraud and cybercrime.

A study from VMware found that two-thirds of respondents encountered malicious deepfakes as part of an attack, a 13 percent increase from the previous year. Note that the VMware study didn't specify whether the deepfake attacks respondents encountered were prerecorded or real-time, and its sample size was only 125 people.

The FBI warned in June of scammers using deepfake technology during remote job interviews. Those using the technique have been spotted interviewing for sensitive jobs that would give them access to customer data and businesses' proprietary information, the FBI said. 

Deepfake videos have also been used to trick live facial recognition software, according to online fraud-combatting startup Sensity AI. Sensity's tests found that nine out of ten vendors' apps were successfully unlocked using a deepfake-altered video streamed from a mobile phone.

Fears over the technology have become serious enough for the European Union to pass laws levying fines on companies that fail to sufficiently fight deepfakes and other sources of disinformation. China also drafted deepfake laws that threaten legal punishment for misuse of the technology, as well as requiring a grant of permission for any legitimate use of deepfakes, which China calls "deep synthesis." 

A workaround for how long?

According to Metaphysic's report, even technology like Nvidia's neural radiance field (NeRF), which can generate a 3D scene from only a few still images, suffers from limitations that make it tricky to develop a good side profile view. 

NeRFs "can, in theory, extrapolate any number of facial angles from just a handful of pictures. [However] issues around resolution, facial mobility and temporal stability hinder NeRF from producing the rich data needed to train an autoencoder model that can handle profile images well," Anderson wrote. We've reached out to Nvidia to learn more, but haven't heard back yet. 

Readers will note that Metaphysic's demonstrations only included celebrity faces, of which plenty of profile views have been captured on film and in photos. The non-famous among us, on the other hand, are unlikely to have many side profile shots on hand.

"Unless you've been arrested at some point, it's likely that you don't have even one such image, either on social media or in an offline collection," Anderson wrote.

Gaurav Oberoi, a software engineer and founder of AI startup Lexion, found much the same when researching deepfakes in 2018. In a post on his blog, Oberoi detailed how deepfakes of comedian John Oliver superimposed over late-night host Jimmy Fallon worked well, but not in profile.

"In general, training images of your target need to approximate the orientation, facial expressions, and lighting in the videos you want to stick them into," Oberoi said. "So if you're building a face swap tool for the average person, given that most photos of them will be front-facing, limit face swaps to mostly forward-facing videos."
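Oberoi's advice amounts to a simple gate: estimate head yaw from a frame's landmarks and skip the swap when the face is turned too far. The helper below is a hypothetical sketch (not any deepfake tool's actual API) using a crude symmetry heuristic: how far the nose has drifted from the midpoint of the eyes, relative to the eye span.

```python
def estimate_yaw_ratio(left_eye_x, right_eye_x, nose_x):
    """Crude yaw proxy: 0.0 means the nose sits midway between the eyes
    (roughly front-on); values near or above 1.0 mean it has drifted
    toward one eye, i.e. the head is turned well to the side."""
    eye_midpoint = (left_eye_x + right_eye_x) / 2
    eye_span = abs(right_eye_x - left_eye_x)
    if eye_span == 0:
        return 1.0  # degenerate detection; treat as a full profile
    return abs(nose_x - eye_midpoint) / (eye_span / 2)

def should_swap(left_eye_x, right_eye_x, nose_x, max_ratio=0.5):
    """Only swap frames that look roughly forward-facing."""
    return estimate_yaw_ratio(left_eye_x, right_eye_x, nose_x) <= max_ratio

# Front-on frame: nose centered between the eyes -> swap it
print(should_swap(100, 200, 150))  # True
# Turned frame: nose nearly over the right eye -> skip it
print(should_swap(100, 200, 195))  # False
```

A real pipeline would derive yaw from a full 3D pose estimate rather than three x-coordinates, but the gating logic is the same: frames outside the training data's pose range are the ones that give the fake away.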

What that means, in effect, is that scammers using real-time deepfakes are unlikely to have the data necessary to create a side profile view that isn't immediately recognizable as fake (provided they're not using a well-photographed celebrity face). 

Until we know deepfakers have found a way around this shortcoming, it's a good idea to ask the person on the other end of a Zoom call to show you a side view of their face, famous or not. ®
