DeepMind AI reacts to the physically impossible like a human infant
Adding assumptions about objects better than learning from scratch, claims researcher
DeepMind has looked to developmental psychology to help AI gain a basic understanding of the physical world.
Real-world physics is difficult for AIs to grasp when asked to start from scratch with only training data to guide them. But researchers have demonstrated that babies as young as five months are surprised when shown a physically impossible event, such as a toy suddenly disappearing, implying they gain some intuitive physical understanding at an early age.
DeepMind researcher Luis Piloto and his colleagues developed an AI, dubbed PLATO, which adopts the thesis that objects play a central role in the representation and prediction of the physical world around us.
They then trained PLATO on videos of many simple scenes to improve its performance. PLATO reacted similarly to a baby expressing surprise at an impossible event, and learning effects were seen after 28 hours of video, according to a paper published in Nature today.
Piloto explained that PLATO uses objects at all stages of processing: representing visual inputs as a set of objects; reasoning about interactions between objects; and producing outputs that are predictions on a per-object basis.
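The three stages Piloto describes can be illustrated with a toy sketch. This is not DeepMind's code — PLATO is a learned neural model — but a minimal, hypothetical illustration of the object-centric idea: represent the scene as per-object states, predict each object's next state independently, and treat prediction error as the "surprise" signal used in violation-of-expectation tests. All names here (`ObjectState`, `predict_next`, `surprise`) are invented for illustration.

```python
# Hypothetical sketch of an object-centric physics predictor
# (illustrative only -- PLATO itself is a learned neural network):
# 1) represent visual input as a set of per-object states,
# 2) predict each object's next state on a per-object basis,
# 3) measure "surprise" as prediction error.

from dataclasses import dataclass

@dataclass
class ObjectState:
    x: float       # position
    vx: float      # velocity
    visible: bool  # whether the object appears in the frame

def predict_next(obj: ObjectState, dt: float = 1.0) -> ObjectState:
    """Per-object prediction: objects persist and move continuously."""
    return ObjectState(x=obj.x + obj.vx * dt, vx=obj.vx, visible=True)

def surprise(predicted: ObjectState, observed: ObjectState) -> float:
    """Prediction error; large when an object vanishes or teleports."""
    if predicted.visible != observed.visible:
        return 1.0  # object-persistence violation: maximal surprise
    return abs(predicted.x - observed.x)

# A plausible event: the ball keeps rolling.
ball = ObjectState(x=0.0, vx=1.0, visible=True)
expected = predict_next(ball)
plausible = ObjectState(x=1.0, vx=1.0, visible=True)

# An "impossible" event: the ball suddenly disappears.
impossible = ObjectState(x=1.0, vx=1.0, visible=False)

print(surprise(expected, plausible))    # low surprise
print(surprise(expected, impossible))   # high surprise
```

The contrast with the "flat" baselines mentioned below is that a flat model would consume the whole frame as one undifferentiated input, with no per-object prediction step to violate.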
"We found that PLATO passed the test in our physical concepts dataset," he said. "But when we trained flat models that were as big or even bigger than PLATO – but didn't have object-based representations – we found that they didn't actually pass all of our tests suggesting that objects really are a critical part of physical understanding."
In an accompanying article, Susan Hespos, professor of psychology at Northwestern University, and Apoorva Shivaram of Western Sydney University, Australia, said the finding both confirms our understanding of how perception develops in humans and advances AI.
"The findings indicate that visual animations can account for some intuitive physics learning, but not enough to account for what we see in infants. In other words, the computational models require some principled knowledge about how objects behave and interact to match the level of learning that is commonly seen in infants," they said.
While a wealth of AI applications might benefit from a bit of real-world physics – self-driving cars anyone? – the authors stressed that the finding was really about helping other studies in AI.
Piloto told journalists: "I think physical understanding is pervasive. It's hard to talk about specific applications because we think it's a little bit more general than that. It really kind of depends on what researchers want to do with it. The point of this work is to establish a benchmark so that people [realize] how well their models understand the physical world. We don't have a view on what they want to do beyond that point." ®