Machine-learning model creates creepiest Doctor Who images yet – by scanning the brain of a super fan
Oodkind look even more terrifying
AI researchers have attempted to reconstruct scenes from Doctor Who by using machine-learning algorithms to convert brain scans into images.
The wacky experiment is described in a paper released via bioRxiv. A bloke lay inside a functional magnetic resonance imaging (fMRI) machine, his head clamped in place, and was asked to watch 30 episodes of the BBC's smash-hit family sci-fi show while the equipment scanned his brain. These scans were then passed to a neural network.
This machine-learning model, dubbed Brain2pix, predicted the scene from Doctor Who that the human guinea pig was watching, using those scans of his brain activity alone. The AI system can't freely read your thoughts nor reproduce any old picture in your head. It is designed and trained to recreate the moment in Doctor Who being watched, purely from the observed brain activity.
Each fMRI brain scan was turned into an array of numbers, or a tensor, using receptive field mapping. This technique "is a way to map very specific brain locations onto the visual space as it tells us which point in the brain is responsible for what pixel that you see in your visual space," Lynn Le, first author of the study and a PhD student at Radboud University Nijmegen in the Netherlands, told The Register.
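Roughly speaking, each voxel's estimated receptive field gives you a location in the visual field onto which its activity can be scattered. The sketch below is only an illustration of that idea, not the paper's code: the array names, the normalised [0, 1] coordinates and the 96x96 grid are our assumptions, not the study's actual values.

import numpy as np

def fmri_to_visual_tensor(voxel_activity, rf_x, rf_y, grid_size=96):
    """Scatter voxel activations onto a visual-field grid.

    voxel_activity : (n_voxels,) BOLD responses for one scan
    rf_x, rf_y     : (n_voxels,) receptive-field centres, assumed here to be
                     normalised [0, 1] visual-field coordinates
    Returns a (grid_size, grid_size) array roughly aligned with the
    pixels the viewer was looking at.
    """
    tensor = np.zeros((grid_size, grid_size))
    counts = np.zeros((grid_size, grid_size))
    cols = np.clip((rf_x * (grid_size - 1)).round().astype(int), 0, grid_size - 1)
    rows = np.clip((rf_y * (grid_size - 1)).round().astype(int), 0, grid_size - 1)
    # Average all voxels whose receptive fields land on the same grid cell
    np.add.at(tensor, (rows, cols), voxel_activity)
    np.add.at(counts, (rows, cols), 1)
    return tensor / np.maximum(counts, 1)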
The Brain2pix model took those tensors as input and output a visual image, effectively translating activity in the brain into the pixels of what was probably being watched. Here's an example of what that looks like in practice:
As you can see, the reconstructions are a bit rum. Karen Gillan – who played Amy Pond, the Doctor's companion in series five to seven – looks more like a terrifying monster or alien from the show. Here are examples of an Ood alien.
Brain2pix was trained using data that pairs a specific Doctor Who clip with its corresponding fMRI scan. That means the machine-generated images are likely to be heavily dependent on the human guinea pig's particular brain scans. The model is built around a generative adversarial network: a generator recreates the scene, and its attempts are passed to a discriminator network that has to guess whether the machine-learning-made image looks like a real clip from the training data.
If the reconstructed image isn’t quite good enough, the discriminator rejects it and the generator has to try again. Over time, the generator improves and manages to trick the discriminator into believing its images are real.
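For readers who want to picture that tug-of-war, the sketch below shows one conditional-GAN training step in PyTorch. It is a simplification under stated assumptions: the generator and discriminator definitions, the discriminator returning a probability, and the plain binary cross-entropy losses are placeholders rather than whatever Brain2pix actually uses.

import torch
import torch.nn.functional as F

def gan_training_step(generator, discriminator, g_opt, d_opt,
                      brain_tensor, real_frame):
    """One adversarial round: the discriminator learns to spot fakes,
    then the generator tries to fool it. Assumes the discriminator is
    conditioned on the brain input and outputs a probability."""
    # Discriminator step: real frames should score 1, reconstructions 0
    fake_frame = generator(brain_tensor).detach()
    d_real = discriminator(real_frame, brain_tensor)
    d_fake = discriminator(fake_frame, brain_tensor)
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: produce a frame the discriminator scores as real
    fake_frame = generator(brain_tensor)
    d_out = discriminator(fake_frame, brain_tensor)
    g_loss = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    return d_loss.item(), g_loss.item()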
Experiments that involve converting brain signals into speech or images are often limited in scope: there is usually significant overlap between the training and testing data, which means you can't draw too many conclusions from the results and performance. Yet in this trial, Brain2pix was asked to generate images from the brain activity of the viewer as he watched episodes for the first time. As such, the brain scans from these clips were new territory for the software, and it had to figure out what was being seen. The overlap between training and testing data was minimal.
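The key methodological detail is that the data was split at the level of whole episodes rather than shuffled frames, so the held-out scans never reuse footage seen in training. In outline – the data structures and episode identifiers below are made up for illustration:

# Hold out entire episodes so no test frame also appears in training.
# `all_episodes` is a hypothetical list of (episode_id, [(scan, frame), ...]) pairs.
test_ids = {"s07e01", "s07e02"}  # illustrative held-out episodes
train_pairs = [pair for ep_id, samples in all_episodes
               if ep_id not in test_ids for pair in samples]
test_pairs = [pair for ep_id, samples in all_episodes
              if ep_id in test_ids for pair in samples]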

It's difficult to get Brain2pix to transfer what it learned about one participant to another, though. Even if two people watched the same Doctor Who clip, the neural network would probably be unable to reconstruct the images from someone else's brain scans if it wasn't explicitly trained on them.
Still, the researchers believe their work may prove useful in the future. “First, it allows us to investigate how brains represent the environment, which is a key question in the field of sensory neuroscience,” Le told us.
“Second, it demonstrates a promising approach for several clinical applications. An obvious example is a brain-computer interface which would allow us to communicate with locked-in patients by accessing their brain states.”
The dream is that, some day, neuroprosthetics will get good enough to help restore vision for the blind. "Here the goal is to create percepts of vision in blind patients by bypassing their eyes and directly stimulating their brains. Approaches like Brain2pix can in principle be used to determine what kind of percepts might be evoked as a result of such direct neuronal stimulation," she added. ®
Editor's note: This article was clarified after publication to make clear that there was little or no overlap in testing and training data: different episodes and seasons were used for testing and training.