Robot brains? Can't make 'em, can't sell 'em
Why dopes still beat boffins
The current generation of "consumer robots" is driven mostly by robot-love: people enjoy things that move around on their own, especially if they can build or tinker with the gadgets themselves. That much became clear at a recent symposium on Robots, which I described here last month. The consumer robot business today is manned by avid tinkerers because there is neither a technology for truly autonomous gadgets nor a business model to support such gadgets even if they did exist.
Robot bacterium?
At the symposium, your reporter posed the following question to the panel:
"The three commercial presenters offer consumer products with pre-programmed behaviors about equal to those of a bacterium. The lone researcher demonstrated fancier computer vision, but it took a dozen graduate students a year to develop, and is still extremely simple and pre-programmed. When can we expect our robots to have the sophistication, responsiveness, and robustness of - say - a mouse?"
No one answered the question, of course, but the most enlightening response came from Colin Angle, CEO of iRobot (which manufactures the autonomous vacuum-cleaning Roomba):
"The Roomba is actually very sophisticated: it has a multi-threaded operating system, and was built by over a hundred computer scientist and a dozen PhDs," he replied.
He's right, of course. The Roomba really is a sophisticated piece of computer engineering - but sophistication by computer standards does not translate to biological sophistication. I was tempted to respond that bacteria are also multi-threaded: they can grow and eat and reproduce and move all at the same time. Unfortunately, Angle's PhDs have the unenviable task of reproducing in silicon what Nature has spent a billion years on.
iRobot's Roomba is a great example of how very hard real-life robotics is. The task for the disk-shaped rolling vacuum seems simple: roam around a room, vacuum up dirt, and come back to the dock in time to recharge. But to accomplish that task, the Roomba needs infra-red locators and "virtual walls" spread around the room to keep it from getting lost elsewhere in the house.
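To see why even that "simple" job takes real engineering, consider a rough sketch of the kind of priority-ordered behaviour loop such a robot might run. This is purely illustrative - the robot object, its sensor methods, and the priorities are my own invention, not iRobot's design:

    # Hypothetical behaviour loop for a Roomba-like robot.
    # The Robot interface and its sensor methods are invented for illustration.
    def control_step(robot):
        """Choose one action per tick, highest-priority condition first."""
        if robot.battery_low():
            return robot.head_for_dock()     # recharging trumps everything else
        if robot.bumper_pressed():
            return robot.back_up_and_turn()  # physical obstacle ahead
        if robot.virtual_wall_detected():
            return robot.turn_away()         # infra-red "keep out" beacon
        if robot.dirt_sensor_triggered():
            return robot.spiral_in_place()   # linger on a dirty patch
        return robot.wander()                # default: keep covering the room

Each branch is easy on its own; the hard part is that the real world keeps producing situations none of the branches anticipated.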
Perhaps the hardest task is to avoid "getting stuck": not just physically getting wedged somewhere, but running in circles or vacuuming the same region over and over. Merely detecting "stuck-ness" from its sensor data required vast amounts of trial-and-error programming, as did working out how to recover. Meanwhile, iRobot has been obliged to simplify the hardware mercilessly, so that the whole package of motors/wheels/vacuum/software stays affordable - say, below $200 - economising that leaves little room to develop sophisticated planning and "intelligence."
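To give a flavour of that trial-and-error work, here is a minimal sketch of one crude way to detect "stuck-ness" from position estimates alone - a hypothetical illustration, not iRobot's method, with invented thresholds:

    # Hypothetical stuck detector: the wheels may be turning, but if the
    # robot's recent position estimates all fall inside a small circle,
    # assume it is trapped. Window size and radius are invented values.
    from collections import deque
    from math import hypot

    class StuckDetector:
        def __init__(self, window=200, radius=0.3):
            self.recent = deque(maxlen=window)  # last N estimated (x, y) positions
            self.radius = radius                # metres: "no real progress" circle

        def update(self, x, y):
            """Record the latest position estimate; return True if we look stuck."""
            self.recent.append((x, y))
            if len(self.recent) < self.recent.maxlen:
                return False                    # not enough history yet
            x0, y0 = self.recent[0]
            return all(hypot(px - x0, py - y0) < self.radius
                       for px, py in self.recent)

Even this toy version would misfire constantly - a robot cleaning a small alcove looks "stuck" by this test - which is exactly why the real detection logic took so much tuning.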
Moore's Law for gears
Angle's clever lament on the business of building such gadgets - "Moore's law doesn't apply to gears" - masks a deeper truth. What he means is that mechanical and hardware costs have not dropped as fast as chips, memory, and bandwidth, so the robotic "industry" has not seen the same exponential growth as communications and computation. He could also mean that selling physical gadgets entails much more than simply assembling them: it means repairing them, offering warranties (an obligation that click-wrap software has wriggled out of), and even protecting customers from robots run amok.
The truth he didn't mention is that hardware is not the reason we have no intelligent robots. In fact motors, sensors and even processors are very cheap now, and a desktop computer core with a video input and a few motorized wheels could be mass-produced for a few hundred dollars. But the software to animate it is quite literally priceless, because it doesn't yet exist. Worse, no one even knows the principles on which to write it.
Here's why.
Missing the basics
Of course people can write software specialized for specific hardware to do a specific task (like the Roomba), but such programs won't generalize to new hardware, sensors, and environments: no one yet has software which "learns" the way brains do, mostly because science doesn't even know what brains do. If we don't understand how we (or even mice) interact gracefully with an uncertain world, how could we expect to program anything else to?