DeepMind: Get a load of our rat-like AI. 'Ere, look. It solves mazes and stuff
That's nice and all, but, er, a brain it ain't, no matter what the marketing suggests
DeepMind researchers have developed an artificially intelligent program, built around a neural network loosely modeled on mammalian brains, that is capable of navigating through mazes.
The results were published in a paper in the journal Nature on Wednesday. DeepMind seems to think its work lifts the lid on how brains really work. We think not.
The team used a grid-cell neural network made up of three layers: a recurrent layer, a linear layer, and an output layer. It was trained by observing the paths of simulated rats shuffling about in a small 2D enclosure.
The virtual rats traced the edges of the square-shaped or circular enclosure without ever touching the walls. The grid-cell neural network received, as input, the direction and distance covered by the model rats over time, and was trained to output a prediction of each simulated rat's position in its environment.
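For the curious, here's roughly what such a three-layer setup could look like in code. To be clear, this is our own minimal PyTorch sketch: the layer sizes, the choice of an LSTM for the recurrent layer, and the place-cell-style position targets are illustrative assumptions, not DeepMind's published hyperparameters.

```python
import torch
import torch.nn as nn

class GridCellNet(nn.Module):
    """Toy version of the described pipeline: recurrent -> linear -> output.
    All sizes here are made up for illustration."""
    def __init__(self, hidden=128, linear_units=256, n_place_cells=64):
        super().__init__()
        # Recurrent layer: integrates velocity (speed, sin/cos of heading) over time
        self.rnn = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        # Linear layer: the layer in which grid-like activity reportedly emerges
        self.linear = nn.Linear(hidden, linear_units)
        self.dropout = nn.Dropout(0.5)
        # Output layer: predicts position as a distribution over place-cell-like targets
        self.out = nn.Linear(linear_units, n_place_cells)

    def forward(self, velocities):
        h, _ = self.rnn(velocities)          # (batch, time, hidden)
        g = self.dropout(self.linear(h))     # linear-layer activations
        return self.out(g), g                # position logits, hidden code

# One illustrative training step on dummy trajectories
net = GridCellNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

velocities = torch.randn(8, 100, 3)          # batch of 8 simulated paths, 100 steps each
targets = torch.randint(0, 64, (8, 100))     # dummy position (place-cell) labels
logits, _ = net(velocities)
opt.zero_grad()
loss_fn(logits.reshape(-1, 64), targets.reshape(-1)).backward()
opt.step()
```

Train something like this on enough simulated trajectories, and you can then go hunting for spatial structure in the linear layer's activations – which is where, as we'll get to, things got interesting.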
In 2005, Edvard Moser and May-Britt Moser discovered that when real rats explored their surroundings, certain neurons in their brains' entorhinal cortex (a region next door to the hippocampus, the bit that handles memory) fired at locations forming hexagonal patterns across the space. In 2014, together with John O’Keefe, they were awarded the Nobel Prize in Physiology or Medicine for discovering how these neurons, dubbed grid cells, help animals understand their position in space like an “internal GPS system.”
Magical, allegedly
The researchers at DeepMind also found that these hexagonal patterns “spontaneously emerged” in about a quarter of the units in the linear layer of its AI's grid-cell network.
It’s an interesting feature, but it shouldn’t be too surprising since similar results have previously been reported. A very similar paper (PDF) accepted at this year’s International Conference on Learning Representations (ICLR) reported pretty much the same results. An even older paper written by a trio of neuroscientists appeared in PLOS Biology in 2006.
Grid-cell neurons have been linked to the idea of “vector-based navigation” systems in biological brains. It appears these neurons help animals – including humans – work out the relative distance and direction between two points by tiling the space between them with repeating, grid-like firing patterns. In effect, we use these grid-cell neurons to work out the gap between things, and thus our position in the world around us.
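Here's the vector idea boiled down to a toy Python calculation – our illustration, we stress, not anything from DeepMind's code: once an agent can put itself and its goal in the same coordinate frame, which is the job grid cells are thought to do, the direct route falls out of simple arithmetic.

```python
import math

def goal_vector(current, goal):
    """Return (distance, heading in degrees) from the current position to the goal."""
    dx, dy = goal[0] - current[0], goal[1] - current[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

dist, heading = goal_vector(current=(1.0, 2.0), goal=(4.0, 6.0))
print(f"Head {heading:.0f} degrees for {dist:.1f} metres")  # Head 53 degrees for 5.0 metres
```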
“Experimental evidence for the direct involvement of grid representations in goal-directed navigation is still lacking,” the DeepMind paper stated. Thus, the researchers took their reinforcement-learning-trained AI agent, popped it into various mazes, and found that it could navigate through the passages to the goal. So far so good.
Shortcuts
Crucially, when they performed the experiment without the AI's grid-cell neurons, the bot’s performance worsened. It was less effective at reaching the goal and did not find shortcuts. The paper by the DeepMind team is a little confusing, and it’s not quite clear what it is about the presence of the grid-cell neurons that makes AI agents and animals better at navigating their environments. We have asked DeepMind for more details.
The results do seem to support the idea that grid-cell neurons are useful for navigation, but the work hasn’t really provided any new insight into how they do it, nor how the brain works. Undeterred, DeepMind billed the work as "compelling."
Any claims that neural-network software can unlock our understanding of neuroscience should be taken with a pinch of salt, considering even neuroscientists aren’t quite sure how the brain works, and computer scientists certainly don’t fully understand how neural networks work either.
“The work showcases the potential of using artificial agents actively engaging in complex behaviours within realistic virtual environments to test theories of how the brain works,” DeepMind insisted in a blog post.
Well, don't believe everything you read on the internet. ®