Neurosurgery Assistant Professor Frank Willett, PhD, and his teammates are using brain-computer interfaces, or BCIs, to help people whose paralysis renders them unable to speak clearly.
The brain’s motor cortex contains regions that control movement – including the muscular movements that produce speech. A BCI uses tiny arrays of microelectrodes (each array is smaller than a baby aspirin), surgically implanted in the brain’s surface layer, to record neural activity patterns directly from the brain. These signals travel through a cable hookup to a computer, where an algorithm translates them into actions such as speech or computer cursor movement.
To decode the neural activity picked up by the arrays into the words the patient wants to say, the researchers use machine learning to train the computer to recognize repeatable patterns of neural activity associated with each “phoneme” – the smallest units of speech sound – and then stitch the phonemes into sentences.
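To make that pipeline concrete, here is a minimal, hypothetical sketch of phoneme-level decoding. It is not the study’s actual system (which uses recurrent neural networks and a large language model); the weights, the tiny phoneme inventory, and the one-word lexicon are all stand-ins for illustration only.

```python
# Illustrative sketch only -- NOT the study's decoder. All weights,
# the phoneme inventory, and the lexicon below are hypothetical.
import numpy as np

PHONEMES = ["SIL", "HH", "AH", "L", "OW"]  # toy inventory; "SIL" = silence

rng = np.random.default_rng(0)
W = rng.normal(size=(len(PHONEMES), 16))   # stand-in for trained weights
b = np.zeros(len(PHONEMES))

def decode_bins(neural_features):
    """Classify each short time bin of neural features as a phoneme."""
    logits = neural_features @ W.T + b               # (T, n_phonemes)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)        # softmax per bin
    return [PHONEMES[i] for i in probs.argmax(axis=1)]

def collapse(labels):
    """Merge repeated labels and drop silence (CTC-style collapsing)."""
    out = []
    for lab in labels:
        if lab != "SIL" and (not out or out[-1] != lab):
            out.append(lab)
    return out

# A real system scores phoneme sequences against a language model to
# "stitch" them into sentences; a one-entry lexicon stands in here.
LEXICON = {("HH", "AH", "L", "OW"): "hello"}

features = rng.normal(size=(50, 16))                 # 50 bins of fake data
phones = collapse(decode_bins(features))
print(LEXICON.get(tuple(phones), "<unknown>"))
```

In the real system, the per-bin classifier is a trained neural network and the stitching step weighs many candidate phoneme sequences against a language model, rather than looking up a single best guess.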
Willett and his colleagues have previously demonstrated that when people with paralysis try to make speaking or handwriting movements – even though they cannot carry them out, because the muscles of the throat, lips, tongue and cheeks, or the nerve connections to them, are too weak – a BCI can pick up the resulting brain signals and translate them into words with high accuracy.
Recently, the scientists took another important step: They investigated brain signals related to “inner speech,” or language-based but silent, unuttered thought.
Willett is the senior author, and postdoctoral scholar Erin Kunz, PhD, and graduate student Benyamin Meschede-Krasa are the co-lead authors of a new study about this exploration, published Aug. 14 in Cell. (Researchers at Emory University; Georgia Institute of Technology; the University of California, Davis; Brown University; and Harvard Medical School were also involved in the study.)
Willett, the co-director of Stanford’s Neural Prosthetics Translational Laboratory, provided insight on the study’s findings and implications.
What is “inner” speech? And why would a BCI/thought-decoding system that could accurately interpret inner speech be better than one that decodes only attempted speech?
Inner speech (also called “inner monologue” or self-talk) is the imagination of speech in your mind – imagining the sounds of speech, the feeling of speaking, or both. We wanted to know whether a BCI could work based only on neural activity evoked by imagined speech, as opposed to attempts to physically produce speech. For people with paralysis, attempting to speak can be slow and fatiguing, and if the paralysis is partial, it can produce distracting sounds and breath control difficulties.
What did you learn from your efforts to design and employ decoding systems that could discern inner speech?
We studied four people with severe speech and motor impairments who had microelectrode arrays placed in motor areas of their brain. We found that inner speech evoked clear and robust patterns of activity in these brain regions. These patterns appeared to be a similar, but smaller, version of the activity patterns evoked by attempted speech. We found that we could decode these signals well enough to demonstrate a proof of principle, although still not as well as we could with attempted speech. This gives us hope that future systems could restore fluent, rapid, and comfortable speech to people with paralysis via inner speech alone.
Does the system’s potential ability to accurately decode unspoken, silent, inner speech raise issues that hadn’t accompanied previous advances in BCI/decoding software technology?
The existence of inner speech in motor regions of the brain raises the possibility that it could accidentally “leak out”; in other words, a BCI could end up decoding something the user intended only to think, not to say aloud. While this might cause errors in current BCI systems designed to decode attempted speech, BCIs do not yet have the resolution and fidelity needed to accurately decode rapid, unconstrained inner speech, so this would probably just result in garbled output. Nevertheless, we’re proactively addressing the possibility of accidental inner speech decoding, and we’ve come up with several promising solutions.
It’s worth pointing out that implanted BCIs are not yet a widely available technology and are still in the earliest phases of research and testing. They’re also regulated by federal and other agencies to help uphold the highest standards of medical ethics.
What are a couple of the steps that can address this privacy concern?
For current-generation BCIs, which are designed to decode neural activity evoked by attempts to physically produce speech, our study demonstrated a new way to train the BCI to more effectively ignore inner speech, preventing it from being decoded accidentally. For next-generation BCIs intended to decode inner speech directly – which could enable higher speeds and greater comfort – we demonstrated a password-protection system that prevents any inner speech from being decoded unless the user first imagines the password (for example, a rare phrase that wouldn’t otherwise be imagined by accident, such as “Orange you glad I didn’t say banana”). Both methods were extremely effective at preventing unintended inner speech from leaking out.
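The gating logic behind the password idea can be sketched in a few lines. This is a hypothetical simplification: the study’s actual gate operates on neural decoder outputs and probabilities, not on plain strings, and the class and method names here are invented for illustration.

```python
# A minimal sketch of the password-gating idea -- hypothetical, not the
# study's implementation. Real systems gate on decoder output scores,
# not exact string matches.
PASSWORD = "orange you glad i didn't say banana"  # imagined unlock phrase

class GatedDecoder:
    def __init__(self, password):
        self.password = password
        self.unlocked = False

    def handle(self, decoded_text):
        """Discard decoded inner speech until the password is detected."""
        if not self.unlocked:
            if decoded_text.strip().lower() == self.password:
                self.unlocked = True
                return "<unlocked>"
            return None          # silently drop everything pre-unlock
        return decoded_text      # pass output through once unlocked

decoder = GatedDecoder(PASSWORD)
for thought in ["what time is it", PASSWORD, "i'd like some water"]:
    print(decoder.handle(thought))
# -> None, <unlocked>, i'd like some water
```

The design point is that nothing is emitted by default: the user opts in by deliberately imagining a phrase unlikely to occur in spontaneous thought.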
What lies ahead? How far off is practical realization of this approach? Your next steps?
Improved hardware will enable more neurons to be recorded and will be fully implantable and wireless, increasing BCIs’ accuracy, reliability, and ease of use. Several companies are working on the hardware part, which we expect to become available within the next few years. To improve the accuracy of inner speech decoding, we are also interested in exploring brain regions outside of the motor cortex, which might contain higher-fidelity information about imagined speech – for example, regions traditionally associated with language or with hearing.