
OK, smarty pants AI. You can beat us humans at video games. But how about real-world puzzles like Jenga? Oh, oh no

Yes, let's distract killer neural networks with boredom-killing toys

Vid Here’s a robot you could take down the pub with you. It won’t bore you to death with politics and sport, nor add to your round, though it will kill time playing Jenga with you.

Jenga needs no introduction, other than to say it requires dexterity and spatial awareness. Both come pretty naturally to humans, but not so much to our metal counterparts. A robot built by a team of researchers at MIT in America has two prongs for fingers, sensors in its wrist, and a camera for eyes.

As the AI-powered bot surveys the tower, its software directs one of the prongs to poke a block, and the wrist sensor feeds back how much that particular block resists, letting the machine judge how movable it is. If it’s too stiff, the robot tries another block; otherwise it keeps pushing in millimetre increments until the block protrudes far enough to be gripped, removed, and placed on top of the tower.
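
In rough terms, that probe-and-push loop looks something like the Python sketch below. The force threshold, push depth, and toy tower are our own illustrative stand-ins for MIT's actual hardware and code, so treat it as a flavour of the approach rather than the real controller.

```python
import random

# Toy, self-contained sketch of the probe-and-push loop described above.
# The thresholds and the simulated "wrist force" are illustrative assumptions.

STIFF_FORCE = 2.0    # assumed threshold: above this, the block is carrying load
PUSH_STEP_MM = 1.0   # push in millimetre increments, as described in the article
EXTRACT_MM = 25.0    # assumed protrusion needed before the gripper can take the block

def wrist_force(block, pushed_mm):
    """Stand-in for the wrist force sensor: stuck blocks resist, loose ones may bind mid-push."""
    if not block["loose"]:
        return 5.0
    return 4.0 if pushed_mm > block["binds_at_mm"] else 0.5

def try_extract(tower):
    for block in tower:
        if wrist_force(block, 0.0) > STIFF_FORCE:
            continue                               # too stiff: try another block
        pushed = 0.0
        while pushed < EXTRACT_MM:
            pushed += PUSH_STEP_MM                 # nudge the block out one millimetre
            if wrist_force(block, pushed) > STIFF_FORCE:
                break                              # resistance rose mid-push: abandon it
        else:
            return block                           # protruding far enough to remove and restack
    return None

tower = [{"id": i, "loose": random.random() < 0.3,
          "binds_at_mm": random.uniform(5, 40)} for i in range(48)]
print("extracted block:", try_extract(tower))
```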

Prodding until you find a suitable block to push may seem like cheating, but, well, given the state of 2019 so far, we'll take a rule-stretching robot any day. Here it is in action...

[Youtube video: MIT's robot playing Jenga]

“Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces,” said Alberto Rodriguez, an assistant professor of mechanical engineering at MIT, this week.

"It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks. This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower. The key challenge is to learn from a relatively small number of experiments by exploiting common sense about objects and physics.”

Teaching a robot to play Jenga using reinforcement learning (RL), a machine-learning technique often used to teach software agents to play games, would require too much training data, according to the team's paper in Science Robotics.

Instead, the researchers favored an approach that let the robot build an abstract model of the relationship between how a block sits in the tower and how it feels when pushed. After about 300 demonstrations during training, it had learned to focus on the blocks that are easy to push.
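
As a very loose illustration of that idea, grouping a few hundred felt pushes into behaviour types so a new poke can be matched against them, here's a hedged sketch using off-the-shelf k-means clustering. The fake force and displacement numbers, and the choice of k-means itself, are our assumptions for illustration, not the model described in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 300  # roughly the number of training pushes mentioned above

# Fake measurements per push: [force felt (N), block displacement (mm)].
loose = np.column_stack([rng.normal(0.5, 0.1, n // 2), rng.normal(8.0, 1.5, n // 2)])
stuck = np.column_stack([rng.normal(4.0, 0.5, n // 2), rng.normal(0.5, 0.3, n // 2)])
pushes = np.vstack([loose, stuck])

# Group the recorded pushes into a handful of behaviour types.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pushes)

# A fresh poke gets matched to a behaviour type; low-force, high-movement
# clusters mark blocks that are safe bets to push.
new_poke = np.array([[0.6, 7.0]])
print("behaviour cluster for new poke:", model.predict(new_poke)[0])
```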

It takes about two seconds to make a move, and from a freshly built tower it can extract about 21 blocks in one continuous run, roughly two-fifths of the standard 54-block set, before the structure becomes too unstable to play or falls over. So it’s not too shabby an opponent. Rodriguez reckons the same sort of skills could be used by robots for tasks such as separating garbage or assembling products.

“In a cellphone assembly line, in almost every single step, the feeling of a snap-fit, or a threaded screw, is coming from force and touch rather than vision,” he said. “Learning models for those actions is prime real estate for this kind of technology.” ®
