The US National Science Foundation has awarded a $1m grant to researchers in the US who want to put speech recognition on a chip, a move the project's proponents claim will revolutionise the way we communicate.
Rob Rutenbar, Jatras professor of electrical and computer engineering and computer science at Carnegie Mellon University, will lead the project. The research will be conducted in tandem with scientists from the University of California, Berkeley.
Currently, speech recognition takes place at the software level, and its precision varies enormously, depending on what you want the system to do. Matching input to a specific set of expected words is relatively trivial, but capturing the full meaning of a conversation in a noisy room is much, much harder.
Rutenbar sums it up thus: "I can ask my cell phone to 'Call Mom', but I can't dictate a detailed email complaint to my travel agent."
To process arbitrary speech, you need a very powerful and power-hungry processor. "But we can't put a Pentium in my cell phone, or in a soldier's helmet, or under a rock in a desert," Rutenbar argues. "The batteries wouldn't last ten minutes."
And this is where the research project comes in, because to really crack speech recognition, the researchers say, we have to go to dedicated silicon. The team's goal is to design a new silicon architecture, powerful enough to crunch the numbers, but between 100 and 1,000 times more efficient than a normal computer chip.
The NSF doesn't hand out million-dollar grants because people want to have a chat with their PCs, or send email without a keyboard. The applications the researchers have in mind are much more serious, and tend towards solving problems faced by emergency services and security organisations.
"Imagine if an emergency responder could query a critical online database with voice alone, without returning to a vehicle, in a noisy and dangerous environment," Rutenbar said.
The researchers expect that the architecture will be ready within two or three years. ®