Facebook pulls plug on mind-reading neural interface that restored a user's speech

Stroke victim 'talks' for first time in 16 years, but The Social Network can't see a route to market

Updated Facebook is abandoning a project to develop a brain-computer interface (BCI), even as the researchers it funded have showcased the device helping someone with severe speech loss communicate with nothing more than thought.

In a paper published in The New England Journal of Medicine, Facebook-funded researchers showed how a "neuroprosthesis" could be used to restore speech in a subject who had lost the ability more than 16 years ago, following a stroke – simply by picking up the intent to speak and transcribing it into words.

"To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralysed and cannot speak," said Edward Chang, chair of Neurosurgery at the University of California San Francisco and the study's lead author. "It shows strong promise to restore communication by tapping into the brain's natural speech machinery.

"My research team at UCSF has been working on this goal for over a decade. We've learned so much about how speech is processed in the brain during this time, but it's only in the last five years that advances in machine learning have allowed us to get to this key milestone. That combined with Facebook's machine learning advice and funding really accelerated our progress."

Facebook has been working with Chang on melding man and machine for some time, showcasing its funding of research into a system for turning thoughts into text back in 2019 – to the consternation of some. The work followed a 2017 proclamation that the company was to build exactly such a device by 2019, so it at least can't be faulted for sticking to the roadmap.

"To see this work come to fruition has been a dream for the field for so long, and for me personally," said Emily Mugler, neural engineering research manager at Facebook Reality Labs. "As a BCI scientist, a core pursuit for me throughout my career has been to demonstrate that the neural signals that drive speech articulation can be decoded for a more efficient BCI that can be used for communication.

"These results have unlocked a lot of possibilities for assistive technologies that could significantly improve quality of life for those with speech impairment."

Despite this success, Facebook has apparently lost interest. "While we still believe in the long-term potential of head-mounted optical BCI technologies," the company announced, "we've decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market: wrist-based devices powered by electromyography."

In this project, a wrist-based sensor will track signals in the wearer's motor neurons and translate them into machine input. "In the near term, these signals will let you communicate with your device with a degree of control that's highly reliable, subtle, personalisable, and adaptable to many situations," the company boasted.

"As this area of research evolves, EMG-based neural interfaces have the potential to dramatically expand the bandwidth with which we can communicate with our devices, opening up the possibility of things like high-speed typing."

The problem appears to be not a lack of success but the absence of a route to market: Facebook is looking to release an actual product, and seems to believe that well-proven, low-cost, and non-invasive wearables built around EMG sensors are the way forward.

"We can confidently say, as a consumer interface, a head-mounted optical silent speech device is still a very long way out," Facebook research director Mark Chevillet, who was in charge of the BCI project, told MIT Technology Review. "Possibly longer than we would have foreseen."

Facebook was far from alone in trying to unlock the secret to adding a USB port to the side of someone's head. DARPA threw cash at five companies working on brain-computer interfaces back in 2017, and Silicon Valley bad boy turned cryptocurrency market manipulator Elon Musk's Neuralink unveiled its neural lace tech in 2019 before sticking it in a pig a year later.

Facebook's full report on the work, its upcoming EMG wristband, and other research projects is on the Tech@Facebook blog.

Its efforts aren't completely wasted, either: the company has released its BCI software, LabGraph, under the permissive MIT licence on GitHub, and has pledged to share the hardware designs "with key researchers and other peers."

We have asked Facebook to comment further. ®

Updated at 14:22 UTC on 15/07/2021 to add:

A Facebook spokesperson told El Reg: "Speech was the focus of our BCI research because it’s inherently high bandwidth — you can talk faster than you can type. But speech isn’t the only way to apply this research — we can leverage the BCI team’s foundational work to enable intuitive wrist-based controls, too.

"Given this, we are no longer pursuing a research path to develop a silent, non-invasive speech interface that would allow people to type just by imagining the words they want to say. Instead of a speech-based neural interface, we’re pursuing new forms of intuitive control with EMG."
