
The eyes don't have it! AI's 'deep-fake' vids surge ahead in realism

Watch John Oliver magically transforming into Stephen Colbert

Using AI to make fake videos look as realistic as possible is all the rage at the moment.

Developers aren't deterred by the controversy surrounding deepfakes – videos in which people's faces are digitally pasted onto the bodies of smut stars and other performers using machine-learning software.

OK, sure, adding Nicolas Cage’s face randomly into movie scenes is pretty funny. But merging the faces of celebrities, politicians, or ex-girlfriends onto the bodies of porno actresses? Not so much.

Despite all this, many are still pushing for new algorithms that create fake videos that are even more lifelike. Researchers from Carnegie Mellon University and Facebook Reality Lab are presenting Recycle-GAN, a generative adversarial system for “unsupervised video retargeting” this week at the European Conference on Computer Vision (ECCV) in Germany.

Unlike most methods, Recycle-GAN doesn’t rely on learning an explicit mapping between the images in a source and target video to perform a face swap. Instead, it’s an unsupervised learning method that begins to line up the frames from both videos based on “spatial and temporal information”.

In other words, the content transferred from one video to another relies not only on mapping the space but also on the order of the frames, to make sure both are in sync. The researchers use the comedians Stephen Colbert and John Oliver as an example. Colbert is made to look as though he is delivering the same speech as Oliver: his face is used to mimic the small movements of Oliver’s head nodding or his mouth speaking.
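The idea of tying the spatial mapping to the temporal order can be sketched as a "recycle" consistency check: map source frames into the target domain, predict the next target-domain frame, map it back, and compare against the true next source frame. The sketch below is illustrative only; `G_xy`, `G_yx`, and `P_y` are toy Python callables standing in for the learned networks in the actual paper.

```python
import numpy as np

def recycle_loss(frames_x, G_xy, G_yx, P_y):
    """Toy version of a recycle-consistency loss:

        L = sum_t || x_{t+1} - G_yx(P_y(G_xy(x_1), ..., G_xy(x_t))) ||^2

    Source frames are mapped into the target domain, a temporal
    predictor P_y guesses the next target-domain frame, and mapping
    back through G_yx should recover the true next source frame.
    """
    loss = 0.0
    mapped = [G_xy(x) for x in frames_x]          # translate into domain Y
    for t in range(len(frames_x) - 1):
        predicted_next = P_y(mapped[: t + 1])     # predict y_{t+1} from history
        reconstructed = G_yx(predicted_next)      # map back into domain X
        loss += float(np.sum((frames_x[t + 1] - reconstructed) ** 2))
    return loss

# Toy "video": each 4x4 frame is a constant that increases by 1 per step,
# so a predictor that adds 1 to the last frame recycles perfectly.
frames = [np.full((4, 4), float(t)) for t in range(5)]
identity = lambda x: x                 # stand-in mapping networks
predictor = lambda ys: ys[-1] + 1.0    # perfect predictor for this toy video

print(recycle_loss(frames, identity, identity, predictor))  # → 0.0
```

With the perfect toy predictor the loss is zero; swap in a predictor that just repeats the last frame and the loss becomes positive, which is the signal the real system would train against.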

[YouTube video]

Here’s one where John Oliver is turned into a cartoon character.

[YouTube video]

It’s not just faces: Recycle-GAN can be used in other scenarios, too. Other examples include syncing up different flowers so they appear to bloom and die at the same time.

[YouTube video]

The researchers also play around with wind conditions, turning what looks like a soft breeze blowing through the trees into a windier day without changing the background.

[YouTube video]

"I think there are a lot of stories to be told," said Aayush Bansal, co-author of the research and a PhD student at CMU. "It's a tool for the artist that gives them an initial model that they can then improve," he added.

Recycle-GAN might prove useful in other areas. Simulating various conditions in video footage taken from self-driving cars could help train them to handle situations rarely seen on the road.

“Such effects might be useful in developing self-driving cars that can navigate at night or in bad weather,” Bansal said. Such footage might be difficult to obtain or tedious to label, but it's something Recycle-GAN might be able to generate automatically. ®
