AI agents can copy humans to get closer to artificial general intelligence, DeepMind finds
Google’s AI offshoot finds copy-cat robots capable of aping living mentors
A team of machine learning researchers from Google's DeepMind claim to have demonstrated that AI can acquire skills in a process analogous to social learning in humans and other animals.
Social learning — where one individual acquires skills and knowledge from another by copying — is vital to the process of development in humans and much of the animal kingdom. The DeepMind team claim to be the first to demonstrate the process in artificial intelligence.
A team led by Edward Hughes, a staff research engineer at Google DeepMind, set out to address some of the limitations in how AI agents acquire new skills. Teaching them new capabilities from human data has relied on supervised learning from large numbers of first-person human demonstrations, which can soak up a lot of lab time and money.
Looking to human learning for inspiration, the researchers sought to show how AI agents could learn from other individuals with human-like efficiency. In a simulated physical task space called GoalCycle3D — a sort of computer-animated playground with footpaths and obstacles — they found AI agents could learn from both human and AI experts across a number of navigational problems, even though they had never seen a human or, we assume, had any idea what one was.
A paper published in peer-reviewed open access journal Nature Communications this week said the team was able to use reinforcement learning to train an agent capable of identifying new experts, imitating their behavior, and remembering the acquired knowledge over the course of a few minutes.
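The paper's actual agents are trained with reinforcement learning over a 3D environment, which is far beyond a news snippet; purely as a loose illustration of the observe-then-recall loop described above — watch an expert, imitate it, then reproduce the behavior from memory once the expert is gone — here is a toy sketch. All names (`GoalCycleToy`, `Follower`) are hypothetical stand-ins, not anything from the study:

```python
import random

class GoalCycleToy:
    """Toy stand-in for GoalCycle3D: goals must be visited in a hidden order."""
    def __init__(self, n_goals=4, seed=0):
        rng = random.Random(seed)
        # The hidden correct cycle of goals; only the expert knows it.
        self.order = rng.sample(range(n_goals), n_goals)

    def reward(self, visits):
        # One point per goal visited in the correct position of the cycle.
        return sum(1 for a, b in zip(visits, self.order) if a == b)

class Follower:
    """Agent that imitates an observed expert, then replays from memory alone."""
    def __init__(self):
        self.memory = []

    def observe(self, expert_visits):
        # Phase 1: cache the demonstrated trajectory while the expert is present.
        self.memory = list(expert_visits)

    def act(self):
        # Phase 2: the expert has dropped out; reproduce the cached behavior.
        return list(self.memory)

env = GoalCycleToy()
expert_trajectory = env.order     # the expert already knows the correct cycle
agent = Follower()
agent.observe(expert_trajectory)  # imitate the expert in real time
solo_score = env.reward(agent.act())  # full reward with no expert in sight
```

The real agents, of course, do not get to copy a trajectory verbatim — they must infer which individual is the expert and generalize to unseen layouts — but the sketch captures the imitate-then-remember structure the researchers evaluated.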
"Our agents succeed at real-time imitation of a human in novel contexts without using any pre-collected human data. We identify a surprisingly simple set of ingredients sufficient for generating cultural transmission and develop an evaluation methodology for rigorously assessing it. This paves the way for cultural evolution to play an algorithmic role in the development of artificial general intelligence," the study said.
The researchers looked forward to others in the field of AI applying the findings more broadly to show how cultural evolution — the accumulation of skills across generations in a community — could arise in AI.
"Bringing all of these components together, it would be fascinating to validate or falsify the hypothesis that cultural evolution in a population of agents may lead to the accumulation of behaviors that solve an ever-wider array of human-relevant real-world problems," the paper said.
"One might consider a line of experiments that investigated cultural accumulation across several 'generations' of humans and AIs in a laboratory environment, drawing comparisons between the different populations or analysing the effects of mixing human and AI participants in a population. We look forward to fruitful interdisciplinary interaction between the fields of AI and cultural evolutionary psychology in the future," the researchers said. ®