Is that you, T-1000? No, just a lil robot that can mimic humans on sight

Don't worry, it's not as terrifying as it sounds


Boffins have taught a robot how to imitate the way someone handles objects after watching them just once.

While humans and animals are intelligent enough to mimic simple movements they've only just seen, this is beyond today's relatively dumb software. We're nowhere near T-1000 Terminator series levels, yet.

Researchers from the University of California, Berkeley, in the USA, have made some progress on this front by teaching code controlling a robot arm and hand to perform three tasks: grabbing an object and placing it in a specific position; pushing an object; and pushing and pulling an object after seeing the same action performed by a human arm.

Think picking up stuff, such as a toy, and placing it on a box, pushing a little car along a table, and so on.

The technique, described in a paper out this week, has been dubbed “one-shot imitation.” And, yes, it requires a lot of training before it can start copycatting people on demand. The idea is to educate the code to the point where it can immediately recognize movements, or similar movements, from its training, and replay them.

A few thousand videos depicting a human arm and a robot arm completing movements and actions are used to prime the control software. The same actions are repeated using different backgrounds, lighting effects, objects, and human arms to increase the depth of the machine-learning model's awareness of how the limbs generally operate, and thus increase the chances of the robot successfully imitating a person on the fly.
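
To make that concrete, here is a rough sketch of how such a meta-training set could be laid out in Python. The type names and fields are our own illustration of the setup described above, not the team's actual code:

```python
from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class HumanDemo:
    frames: np.ndarray   # (T, H, W, 3) video of a human performing the task

@dataclass
class RobotDemo:
    frames: np.ndarray   # (T, H, W, 3) video from the robot's camera
    actions: np.ndarray  # (T, A) arm and claw commands recorded per frame

@dataclass
class Task:
    human: HumanDemo     # video only: nobody records action labels for a human arm
    robot: RobotDemo     # the same action performed by the robot, with labels

# A few thousand of these, re-recorded with different backgrounds, lighting,
# objects, and human arms, make up the meta-training set.
meta_train: List[Task] = []
```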

Adapts

Chelsea Finn, a PhD student, and Tianhe (Kevin) Yu, an undergraduate student, both at the UC Berkeley Artificial Intelligence Research group, explained to The Register on Wednesday: “The human demos allow [the robot] to learn how to learn from humans. Using the human demos – just a video of [a] human performing the task – the robot adapts to the task shown in the demonstration.”

The training videos are converted into sequences of still images and fed into a convolutional neural network that maps the pictured actions to the possible movements that can be performed by the robot arm and its claw, so that it builds up an understanding of how to position itself to imitate movements caught on camera. It also learns the features of objects, such as colors and shapes, so that it knows how to identify and grasp them.
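
As a sketch of what such a frame-to-action network could look like, here's a minimal version in PyTorch. The layer sizes and the seven-number action output are our assumptions for illustration, not the paper's exact design:

```python
import torch
import torch.nn as nn

class ConvPolicy(nn.Module):
    """Maps one RGB camera frame to a low-level command for the arm and claw."""

    def __init__(self, action_dim: int = 7):   # e.g. six joint velocities + gripper
        super().__init__()
        self.features = nn.Sequential(          # convolutional feature extractor;
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),   # this is where the colors
            nn.Conv2d(32, 32, 5, stride=2), nn.ReLU(),  # and shapes of objects
            nn.Conv2d(32, 32, 5, stride=2), nn.ReLU(),  # get picked up
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(32, action_dim)   # image features in, motor commands out

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frame))

policy = ConvPolicy()
action = policy(torch.randn(1, 3, 128, 128))    # one 128x128 frame in, one action out
```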

Crucially, the robot should be able to cope with new objects it hasn't seen during training; simply watching a person handle an arbitrary thing should be enough for it to twig how it should move its joints to pick up and move the item in an identical fashion.

It learns via a process called meta-learning. This is not the same as supervised learning, which is typically used in deep-learning research and involves training systems to perfect a narrow, single task and testing the software by giving it an example that it hasn’t seen before.

“Meta-learning is learning to learn a wide range of tasks quickly and efficiently. By applying meta-learning to robotics, we hope to enable robots to be generalists like humans, rather than mastering only one skill,” Finn and Yu said. “Meta-learning is particularly important for robotics, since we want robots to operate in a diverse range of environments in the real world.”

“In essence, the robot learns how to learn from humans using this data. After the meta-training phase, the robot can acquire new skills by combining its learned prior knowledge with one video of a human performing the new skill,” they, and their fellow academics, added in their paper.
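
That "learning to learn" recipe follows the model-agnostic meta-learning (MAML) line of work from the same Berkeley lab: adapt the network with a gradient step on one demo (the inner loop), then train so that the adapted network imitates well (the outer loop). Below is a heavily simplified sketch of one meta-training step in that spirit, using PyTorch's torch.func. Both losses are placeholders – the human videos carry no action labels, so the real system cannot use a simple supervised inner loss – and ConvPolicy is the network sketched earlier:

```python
import torch
from torch.func import functional_call, grad

def adaptation_loss(params, policy, human_frames):
    # Inner loss computed from the human video alone. The real method learns
    # its adaptation objective; squared activations are a stand-in here.
    out = functional_call(policy, params, (human_frames,))
    return (out ** 2).mean()

def imitation_loss(params, policy, robot_frames, robot_actions):
    # Outer loss on the robot demo, where the true commands are known.
    pred = functional_call(policy, params, (robot_frames,))
    return ((pred - robot_actions) ** 2).mean()

def meta_step(policy, task, inner_lr=0.01):
    params = dict(policy.named_parameters())
    # Inner step: adapt the weights by "watching" the human demo once.
    grads = grad(adaptation_loss)(params, policy, task["human_frames"])
    adapted = {k: p - inner_lr * grads[k] for k, p in params.items()}
    # Outer objective: after adapting, the policy should match the robot demo.
    return imitation_loss(adapted, policy,
                          task["robot_frames"], task["robot_actions"])
```

Training then minimizes meta_step's output over thousands of tasks, so that a single inner gradient step becomes enough for the robot to pick up a new task.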

Meta-training to meta-testing

After the robot has been trained, it can use inference to imitate a human after watching a clip it hasn’t seen before. You can see the robot in action here:

[YouTube video]

At first, the movements of the human and the robot may look slightly different. That's because the robot may not have picked up on subtle or minute hand and finger gestures, or may be thrown off by the lighting and background. However, the overall task is completed in pretty much the same way.
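
At a high level, meta-testing reuses the inner adaptation step from the sketch above, run once on the never-before-seen clip; the adapted weights then drive the arm frame by frame. Roughly, with camera_stream and send_to_robot as hypothetical stand-ins for the robot's I/O:

```python
from torch.func import functional_call, grad

# Reuses `policy` and `adaptation_loss` from the sketches above;
# `new_human_frames` stands for the single unseen demo video.
params = dict(policy.named_parameters())

# One adaptation step from one human video -- hence "one-shot" imitation.
grads = grad(adaptation_loss)(params, policy, new_human_frames)
adapted = {k: p - 0.01 * grads[k] for k, p in params.items()}

# The adapted policy then runs live, one camera frame at a time.
for frame in camera_stream():                    # hypothetical camera feed
    action = functional_call(policy, adapted, (frame,))
    send_to_robot(action)                        # hypothetical motor interface
```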

The robot arm can’t learn a motion completely from scratch on demand: it needs to have seen something similar during training. It manages to push, place, and pick up the right objects more than 70 per cent of the time during tests, though there are a few failure cases in which it picks the wrong object or motion.

It’s also more likely to fail to copy humans when the video depicts a new background, which shows that the robot's brain is somewhat preoccupied by patterns in its environment that are not particularly important to the task at hand.

Deep learning is data hungry, and the researchers reckon collecting more of it, using a more diverse range of backgrounds during training, will reduce the failure rate. A number of motion faults also occurred across all backgrounds, so the learning algorithms controlling the robot have to be improved, too.

The team believes experiments like these will help refine robots that have to select the correct product or other object from a collection of things. At the moment, the team uses a different model for each of the three tasks. It hopes to integrate these into one, so that a single robot can perform all the different chores, and to increase the complexity of the tasks. ®
