Read lips? Siri wants to feel them, according to fresh Apple patent

We make movements when we talk, and gyro, accelerometer and sensor tech could improve speech recog

Siri's ability to recognize speech may be getting a boost through the addition of lip-reading – or, more appropriately, lip-feeling – technology, according to a newly published Apple patent.

The patent, Keyword Detection Using Motion Sensing [PDF], sees Apple turning not to new forms of artificial intelligence or visual speech recognition, but to the plentiful sensors already embedded in its many devices. Those sensors could be configured to recognize particular vibrations and head movements as matching specific words or phrases, and to use them in the same way as audio recognition of wake words like "Hey Siri."

Motion sensors in devices "may detect muscle movement, vibrations, head motions, and the like and output a stream of data representing the specific force, angular rate, and/or orientation created by said motions," Apple boffins wrote in their patent filing. These sensors could be embedded in wearable devices like a pair of AirPods, "smart glasses, or the like."
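To make that concrete, here's a minimal, purely hypothetical Swift sketch of how such a motion stream could be read from a pair of AirPods today using Apple's existing Core Motion framework (CMHeadphoneMotionManager, available since iOS 14). The buffering, the energy threshold, and the wake-candidate callback are our own illustrative assumptions, not anything described in the filing.

```swift
import CoreMotion

// Illustrative sketch only: reads the kind of motion stream the patent describes
// (specific force, angular rate, orientation) from AirPods via Core Motion.
// The windowing and the "wake candidate" trigger below are invented for
// demonstration; they are not Apple's disclosed method.
final class MotionWakeSensor {
    private let manager = CMHeadphoneMotionManager()
    private var window: [CMDeviceMotion] = []   // rolling buffer of recent samples

    func start(onCandidate: @escaping ([CMDeviceMotion]) -> Void) {
        guard manager.isDeviceMotionAvailable else { return }
        manager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self, let motion else { return }
            // Each sample carries user acceleration (specific force), rotation
            // rate (angular rate), and attitude (orientation).
            self.window.append(motion)
            if self.window.count > 100 { self.window.removeFirst() }  // roughly one second of samples (assumed rate)

            // Hypothetical trigger: pass the window along whenever jaw/head
            // movement energy exceeds an arbitrary threshold.
            let energy = self.window.reduce(0.0) {
                $0 + abs($1.userAcceleration.x) + abs($1.userAcceleration.y) + abs($1.userAcceleration.z)
            }
            if energy > 5.0 { onCandidate(self.window) }
        }
    }

    func stop() { manager.stopDeviceMotionUpdates() }
}
```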

Apple even pitches motion sensing as an alternative to audio sensors that must remain active and continually record a buffer of sound in anticipation of the wake word, an approach the company claims could improve battery life and, as a side effect, privacy.

"The audio sensor may remain in an inactive state (e.g., in a low-power, idle, or powered-off mode) while [the] user speaks voice input, such that no audio data corresponding to voice input is produced," the patent suggests. 

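A rough sketch of that gating, again in Swift and again purely illustrative: the microphone path is never started until the motion-based detector (the MotionWakeSensor type from the sketch above, plus a classifier we simply assume exists) reports a likely wake phrase. The filing doesn't specify an implementation; this is just one way the described behaviour could be wired up.

```swift
import AVFoundation
import CoreMotion

// Purely illustrative: no audio is captured or buffered until the motion-based
// detector fires, mirroring the "audio sensor remains inactive" behaviour the
// filing describes. MotionWakeSensor and the classifier closure are assumptions.
final class GatedAssistantFrontEnd {
    private let audioEngine = AVAudioEngine()       // not started: nothing is recorded
    private let motionSensor = MotionWakeSensor()
    private var listening = false

    func run(looksLikeWakePhrase: @escaping ([CMDeviceMotion]) -> Bool) {
        motionSensor.start { [weak self] window in
            guard let self, looksLikeWakePhrase(window) else { return }
            self.beginListening()                   // audio exists only from this point on
        }
    }

    private func beginListening() {
        guard !listening else { return }
        listening = true
        let input = audioEngine.inputNode
        input.installTap(onBus: 0, bufferSize: 1024,
                         format: input.outputFormat(forBus: 0)) { buffer, _ in
            // Hand buffers to the speech recognizer here; before the motion
            // trigger fired, no audio data was ever produced.
        }
        try? audioEngine.start()
    }
}
```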
That doesn't mean a motion-sensing digital assistant wouldn't still need to process a ton of data, as Apple also admits in the patent. To match vibrations and head movements to specific words, Apple said it would either need to train on a small sample of words (akin to how Siri is trained on a new iDevice) and rely on a generalized corpus of similar data from other users, or collect a considerable amount of data from users, whose devices would likely need to be "listening" for quite a while in order to pair sensor readings with audio signals.
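For illustration, the data-pairing half of that second approach might look something like the sketch below, where each window of headphone motion samples is labelled by whether a conventional audio wake-word spotter fired at the same moment. Every type, feature, and label here is our own invention for the sake of example; the filing doesn't spell out a training recipe.

```swift
import CoreMotion

// Hypothetical illustration of pairing sensor readings with audio signals:
// during a calibration period both detectors run, and each motion window gets
// a label from the audio recognizer, producing examples that could train a
// motion-only wake detector. Features and types are invented for this sketch.
struct MotionTrainingExample {
    let features: [Double]     // mean absolute acceleration and rotation rate per axis
    let saidWakePhrase: Bool   // label supplied by the audio-based spotter
}

func makeTrainingExample(window: [CMDeviceMotion],
                         audioHeardWakePhrase: Bool) -> MotionTrainingExample {
    let n = Double(max(window.count, 1))
    let features = [
        window.reduce(0.0) { $0 + abs($1.userAcceleration.x) } / n,
        window.reduce(0.0) { $0 + abs($1.userAcceleration.y) } / n,
        window.reduce(0.0) { $0 + abs($1.userAcceleration.z) } / n,
        window.reduce(0.0) { $0 + abs($1.rotationRate.x) } / n,
        window.reduce(0.0) { $0 + abs($1.rotationRate.y) } / n,
        window.reduce(0.0) { $0 + abs($1.rotationRate.z) } / n,
    ]
    return MotionTrainingExample(features: features, saidWakePhrase: audioHeardWakePhrase)
}
```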

Data protection much?

"This gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person," Apple said, noting that such data could include location information, demographic data, telephone numbers, physical and email addresses, and health data, "or any other identifying or personal information."

As such, Apple concedes, "the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data," suggesting users would be able to opt out of such a feature if it ever becomes available.

As for the possible accessibility features that come to mind when one hears "lip-reading iPhone," it appears Apple is focused purely on improving speech recognition for iPhone users themselves, not the people they may be trying to listen to in a crowded room. Apple makes no mention of using cameras to read lips in the patent, and the company didn't respond to our questions asking whether there was any plan to use this technology to help people who are hard of hearing.

Whether the tech will ever come to fruition is, as usual, unclear, especially given Apple's history of filing plenty of patents that never make it past the concept stage. The feature may nonetheless be a welcome one for Apple users concerned that the iMaker is slipping behind its competition on artificial intelligence, including Siri's speech recognition and the digital assistant's otherwise limited feature set, which this reporter can attest to.

It's also worth noting that this is simply a patent application publication, not a granted patent, which means Apple doesn't yet hold any exclusive rights to the concept of using motion sensors, accelerometers, and gyroscopes to recognize speech without listening to it.

Writing in anticipation of Apple's Q3 earnings call yesterday, Apple watcher Ming-Chi Kuo said he believed there was no sign that Apple would be integrating more AI into additional products in the next year, so don't expect Siri to be intimately recording your lips' every move anytime soon. ®
