I like BigGANs but their pics do lie, you other AIs can't deny

Gaze at the computer-created horror of Dogball


Pics Images generated by AI have traditionally been fairly easy to spot, since something about them looks slightly off to the human eye, but it’s getting harder to tell what’s real from what’s fake.

Researchers from DeepMind and Heriot-Watt University in the UK have managed to significantly boost the quality of images simulated by a generative adversarial network (GAN) by increasing the size of the machine learning model, which they dubbed BigGANs.

The best results, including pictures of a brown dog with floppy ears, an island landscape, a butterfly, and a cheeseburger, look like real photos at first glance.


Image credit: Brock et al.

Keep staring, however, and you will begin to see some slight inconsistencies. The dog’s eyes are glazed over, and there is a weird patch on the butterfly’s wing that doesn’t belong there. These are still the best images created by a GAN, according to the results published on arXiv late last week.

GANs are made up of two separate neural networks working against each other. The generator network produces images, and the discriminator network tries to determine whether each image is real or fake. During training, the generator learns to fine-tune its output to produce better images that slip past the discriminator.
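That adversarial setup can be sketched in miniature, well short of BigGAN itself. The toy below, a hypothetical one-dimensional example of ours rather than anything from the paper, has a "generator" learn to mimic numbers drawn from a bell curve centred at 4, while a "discriminator" tries to tell its output apart from the real samples. The small weight decay on the discriminator is our addition to keep this toy numerically stable:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real" data: scalars drawn from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, maps noise z to a sample
w, c = 0.1, 0.0   # discriminator d(x) = sigmoid(w*x + c), estimates P(x is real)
lr, wd, batch = 0.05, 0.05, 64   # wd: weight decay on d, damps the usual GAN oscillation

for step in range(3000):
    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    x, z = real_batch(batch), rng.normal(0.0, 1.0, batch)
    g = a * z + b
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    # Hand-derived gradients of the binary cross-entropy loss for this toy.
    grad_w = np.mean(-(1 - d_real) * x + d_fake * g) + wd * w
    grad_c = np.mean(-(1 - d_real) + d_fake) + wd * c
    w, c = w - lr * grad_w, c - lr * grad_c

    # Generator step: push d(fake) toward 1 (non-saturating loss).
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

print(f"generator output now centred near {b:.2f} (real data is centred at 4.0)")
```

By the end of the loop the generator's offset `b` has drifted toward the real data's mean: it has learned to produce samples the discriminator can no longer reliably reject.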

The trick to getting more realistic results is to make everything bigger. “We demonstrate that GANs benefit dramatically from scaling, and train models with two to four times as many parameters and eight times the batch size compared to prior art,” the paper said.


BigGAN is trained on ImageNet, a popular dataset used for image classification tasks that contains millions of images of different objects. The best-performing model has a batch size of 2,048, meaning it slurps up that many images from the dataset during each training iteration. Neural networks typically cycle through the whole dataset several times over the course of training.
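The arithmetic behind that batch size is straightforward. Taking the roughly 1.28 million images in the standard ImageNet-1k training split, one full pass over the dataset at BigGAN's largest batch size works out to:

```python
import math

dataset_size = 1_281_167   # images in the ImageNet-1k training split
batch_size = 2_048         # BigGAN's largest batch size
steps_per_epoch = math.ceil(dataset_size / batch_size)
print(steps_per_epoch)     # 626 iterations to see every image once
```

So each complete pass over ImageNet takes only a few hundred iterations at this scale, compared with tens of thousands for the small batch sizes more typical of earlier GANs.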

It also has over 158 million parameters - the internal weights adjusted during the training process - and took 128 cores of a Google TPUv3 Pod about one to two days to train.

Another technique, which the researchers call the “truncation trick,” forces the generator to create images that are more similar to the training dataset, making them more realistic.

“The output of the generator is controlled by how much variability its input has. Our technique makes the output less variable, but higher quality, by reducing the variability of the input,” Andrew Brock, a PhD student at the Edinburgh Centre of Robotics, at Heriot-Watt University, told The Register.
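The paper describes this as resampling any latent noise value whose magnitude exceeds a chosen threshold; a small numpy sketch of that resampling (our own illustration, with a hypothetical threshold of 0.5):

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_noise(n, threshold):
    """Sample z ~ N(0, 1), resampling any value whose magnitude exceeds
    `threshold` -- the resampling form of the truncation trick."""
    z = rng.normal(0.0, 1.0, n)
    mask = np.abs(z) > threshold
    while mask.any():
        z[mask] = rng.normal(0.0, 1.0, mask.sum())
        mask = np.abs(z) > threshold
    return z

full = rng.normal(0.0, 1.0, 10_000)
trunc = truncated_noise(10_000, 0.5)
print(np.std(full), np.std(trunc))  # truncated noise varies much less
```

Feeding the generator this narrower band of noise trades sample variety for fidelity: lowering the threshold yields outputs closer to the "safest", most typical images the model has learned.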

Using AI to create increasingly realistic fake content has raised concerns. There are numerous cases where GANs have been used to create images that mimic someone else’s face. Pictures of politicians like Barack Obama and Donald Trump have been manipulated to make them appear to say things they never said. Internet perverts have also used similar technology to paste their favourite actresses’ faces onto the bodies of porn actors.

Brock told El Reg he is also worried about how GANs can be used maliciously. “It's part of why I chose to focus on more general image modeling rather than faces - it's a lot harder to use images of Dogball for political or unethical purposes than it is to use an image of another person.”


Dogball! A cross between some kind of dog and a tennis ball. Image credit: Brock et al.

GANs have helped developers create art, and although they might not seem to have much practical use, they’re interesting to study.

“Neural nets which can generate convincing samples have to learn the rich structure that underlies our complex visual world - you have to ‘understand’ something in order to draw it. If we can build models that understand that thoroughly then there's a lot of interesting things we can do with the representations they learn,” he added. ®


