Unquiet minds

For years, people with paralysis have used brain-computer interfaces to turn neural signals into actions by thinking about the actions they would like to take: typing words, controlling robotic arms, producing speech. But new research shows that the interfaces could translate not only intended speech, but internal thoughts as well. Sort of.

It’s a major step for people with communication challenges, said Daniel Rubin, a co-author of the study, published in the journal Cell.

“Communication is sort of a key part of what we are as people,” said Rubin, an instructor at Harvard Medical School and a neurologist at Mass General. “And so any way that we can help restore communication, we think, is a way that we can improve quality of life.”

The study builds on decades of work from BrainGate, a long-running, multi-institution clinical trial. Early experiments gave people with paralysis the ability to complete different tasks — type letters, move a computer cursor, operate a mechanical arm — using an implanted brain-computer interface (BCI). Participants imagined moving their hands up, down, left, or right, while tiny sensors in the motor cortex decoded those intentions. In recent years, researchers wondered whether, instead of decoding the intended movement of the hand, wrist, and arm, they could decode the intended movement of the muscles we use to talk — the face, mouth, jaw, and tongue.

The answer is yes — but only thanks to advances in artificial intelligence. 

“It’s a vastly different computational problem to think about decoding speech as compared to decoding hand movements,” Rubin said. 

To simplify the challenge, participants attempted to speak preset sentences. Electrode arrays implanted on their motor cortices picked up signals corresponding to 39 English phonemes, or basic speech sounds. Machine learning models then assembled those sounds into the most likely words and sentences. 

“It’s doing it to some degree probabilistically,” said study co-author Ziv Williams, an associate professor at Harvard Medical School and a neurosurgeon at MGH. “For example, if by recording the neurons in the brain, you know that there is a ‘D’ sound and a ‘G’ sound, they’re likely trying to say the word ‘dog.’” 
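The sketch below illustrates the probabilistic idea behind Williams’ “dog” example: given per-phoneme probability estimates from a neural decoder, score candidate words from a pronunciation lexicon and keep the most likely one. It is a minimal toy under stated assumptions, not the study’s method; the actual BrainGate decoders rely on neural networks and language models, and the lexicon, phoneme labels, and probabilities here are made-up placeholders.

```python
# Toy sketch of probabilistic phoneme-to-word decoding.
# All words, phoneme labels, and probabilities are illustrative, not study data.
import math

# Hypothetical pronunciation lexicon mapping words to phoneme sequences.
LEXICON = {
    "dog":  ["D", "AO", "G"],
    "dig":  ["D", "IH", "G"],
    "talk": ["T", "AO", "K"],
}

def word_log_likelihood(word_phonemes, frame_probs):
    """Sum log-probabilities of a word's phonemes against the decoder's
    per-frame phoneme probability estimates (naive one-frame-per-phoneme alignment)."""
    if len(word_phonemes) != len(frame_probs):
        return float("-inf")
    score = 0.0
    for phoneme, probs in zip(word_phonemes, frame_probs):
        score += math.log(probs.get(phoneme, 1e-6))  # small floor for unseen phonemes
    return score

def decode_word(frame_probs):
    """Return the lexicon word most consistent with the decoded phoneme probabilities."""
    return max(LEXICON, key=lambda w: word_log_likelihood(LEXICON[w], frame_probs))

# Example: the decoder is fairly confident about a "D" sound and a "G" sound,
# less certain about the vowel in between -- "dog" still wins.
frames = [
    {"D": 0.8, "T": 0.2},
    {"AO": 0.5, "IH": 0.4, "EH": 0.1},
    {"G": 0.7, "K": 0.3},
]
print(decode_word(frames))  # -> "dog"
```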

Once it was determined that algorithms could decode intended speech, investigators — led by the BrainGate team at Stanford University and including Williams, Rubin, and their Harvard/MGH colleague Leigh Hochberg — turned to inner speech. Previous studies suggested that silently rehearsing words activates areas of the motor cortex similar to those used in attempted speech, just at a lower signal strength.

In one of several experiments exploring the decoding of inner speech, researchers asked participants to look at a grid of colored shapes — in this case, green circles, green rectangles, pink circles, and pink rectangles. The team hypothesized that when asked to count only the pink rectangles, participants would use their inner speech to count the shapes as they scanned the grid. Over multiple trials, a decoder learned to pick up number words from those unspoken counts. 

But the system ran into trouble when researchers attempted to decode unstructured inner thought. When participants were asked open-ended autobiographical questions — e.g., “Think about a memorable vacation you’ve taken” — the decoders mostly produced noise. 

To Rubin, it gets to the very root of thought itself, and the limitations of peering into someone else’s brain. “When I’m thinking, I’m hearing my own voice saying things; I always have an internal monologue,” he said. “But that’s not necessarily a universal experience.” Lots of people don’t hear words when they think to themselves, he said; people who primarily use sign language may experience thought as visualizing hand movements. 

“It gets at an area of neuroscience that we’re just starting to really have the framework to think about,” Rubin said. “This notion that a representation of speech is probably distinct from a representation of language.” 

Next-generation implants are likely to pack 10 times as many electrodes into the same space, vastly expanding the range of neural signals researchers can tap, Rubin said. He believes that will make a big difference for future users.

“The thing that we lay out is, because this is research, we can’t guarantee that things are going to work perfectly,” he said. “All of our participants know that, and they say, ‘You know what, it would be great if this is something that I could use and was helpful for me.’ But they really get involved so that people who have paralysis years from now will have a better experience and an improved quality of life.” 

At least for some of the participants, it’s already having an impact. Two of the four people profiled in the study use their BCI as their primary mode of communication.

