AI returns voice to paralysed woman after 18 years

A stroke survivor has regained the ability to speak for the first time in 18 years, after a breakthrough brain implant and digital avatar allowed her speech and facial expressions to be synthesised directly from her brain signals.

When Canadian high-school maths teacher Ann suffered a brainstem stroke at the age of 30 that left her severely paralysed, she lost all muscular control and couldn’t even breathe unassisted. It took several years of physical therapy before Ann could move her facial muscles enough to laugh or cry.

According to the American Stroke Association, brainstem strokes can impair any or all functions of the central nervous system, including consciousness, motor control, blood pressure, and breathing. More severe brainstem strokes can cause locked-in syndrome, a condition in which survivors can move only their eyes.

Typing with the communication device she currently relies on – which lets her select letters on a computer screen using small movements of her head – Ann said: “Overnight, everything was taken from me. I had a 13-month-old daughter, an eight-year-old stepson, and a 26-month-old marriage.”

The implant is part of a study, published in the journal Nature, by researchers at UC San Francisco (UCSF) and UC Berkeley who are developing a new brain-computer interface (BCI) that allows people like Ann to communicate through a lifelike digital avatar. It is the first time that either speech or facial expressions have been synthesised from brain signals.

Signals from the brain are decoded into text at a rate of approximately 80 words per minute, compared with the roughly 160 words per minute of natural human speech. Ann’s current communication device – which speaks in a computerised voice with a British accent – manages only about 14 words per minute. Even so, that didn’t stop her from writing a paper for a psychology class in 2020, letter by letter, about her life since the stroke.

“Locked-in syndrome, or LIS, is just like it sounds,” Ann wrote. “You’re fully cognisant, you have full sensation, all five senses work, but you are locked inside a body where no muscles work. I learned to breathe on my own again, I now have full neck movement, my laugh returned, I can cry and read and, over the years, my smile has returned, and I am able to wink and say a few words.”

Ann learned about the study in 2021, after reading about a paralysed man named Pancho who had similarly suffered a brainstem stroke and had helped the researchers translate his brain signals into text when he attempted to speak.

Edward Chang, MD, chair of neurological surgery at UCSF, a member of the UCSF Weill Institute for Neurosciences and the Jeanne Robertson Distinguished Professor, has worked on the technology behind the study for more than a decade. He hopes this latest breakthrough will lead, in the near future, to an FDA-approved system that enables speech from brain signals. He commented:

“Our goal is to restore a full, embodied way of communicating, which is the most natural way for us to talk with others. These advancements bring us much closer to making this a real solution for patients.”

The implant is a paper-thin rectangle of 253 electrodes positioned on the surface of the brain over areas critical for speech. A cable plugged into a port fixed to Ann’s head connects the electrodes to a bank of computers, allowing them to intercept the brain signals that would otherwise – had Ann not suffered the stroke – have gone to the muscles in her lips, tongue, jaw, and larynx.
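
For readers curious about what “intercepting” those signals looks like in software, here is a minimal sketch of a generic decoding loop in Python. It assumes a 253-channel recording streamed as a NumPy array, crude high-gamma band-power features, and a placeholder linear classifier; the channel count comes from the article, but every other name, rate, and parameter is an illustrative assumption rather than the study’s actual pipeline.

```python
import numpy as np

N_CHANNELS = 253        # electrodes in the implanted grid (as reported)
SAMPLE_RATE = 1000      # Hz; assumed recording rate for this illustration
WINDOW_SECONDS = 0.5    # length of one decoding window

def band_power(window: np.ndarray) -> np.ndarray:
    """Crude per-channel high-gamma power estimate via FFT.

    window: array of shape (n_samples, N_CHANNELS).
    Returns one feature value per channel.
    """
    spectrum = np.abs(np.fft.rfft(window, axis=0)) ** 2
    freqs = np.fft.rfftfreq(window.shape[0], d=1.0 / SAMPLE_RATE)
    mask = (freqs >= 70) & (freqs <= 150)          # high-gamma band
    return spectrum[mask].mean(axis=0)

def decode_window(window: np.ndarray, weights: np.ndarray) -> int:
    """Placeholder linear decoder: features -> most likely phoneme index."""
    features = band_power(window)
    scores = weights @ features                    # one score per phoneme class
    return int(np.argmax(scores))

# Simulated half-second of neural data and random decoder weights,
# standing in for real recordings and a trained model.
rng = np.random.default_rng(0)
window = rng.standard_normal((int(SAMPLE_RATE * WINDOW_SECONDS), N_CHANNELS))
weights = rng.standard_normal((39, N_CHANNELS))    # 39 phoneme classes, assumed
print("decoded phoneme index:", decode_window(window, weights))
```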

Kaylo Littlejohn, a graduate student working with Edward Chang and with Gopala Anumanchipalli, PhD, a professor of electrical engineering and computer sciences at UC Berkeley, said: “We’re making up for the connections between her brain and vocal tract that have been severed by the stroke.”

Over many weeks, the BRAVO3 team used AI to train the system’s algorithms to recognise Ann’s unique brain signals for speech, with Ann repeating phrases from a 1,024-word vocabulary over and over until the patterns were identified. Rather than learning whole words, the system decodes them from smaller phonemic components, needing only 39 phonemes to decipher any word in English.
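
To see why a phoneme-level decoder is so economical, consider the toy lookup below: once the system can tell the 39 English phonemes apart, any word in the vocabulary can be assembled from them. The pronunciation table and helper function are invented for illustration and are not taken from the study.

```python
# Toy phoneme-to-word lookup: the decoder only has to distinguish 39 phonemes,
# and words are then matched against their pronunciations. The entries below
# are illustrative (ARPAbet-style), not the study's actual vocabulary.
PRONUNCIATIONS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "AO", "T", "ER"): "water",
    ("TH", "AE", "NG", "K", "S"): "thanks",
}

def phonemes_to_word(phonemes: list[str]) -> str:
    """Return the vocabulary word matching a decoded phoneme sequence."""
    return PRONUNCIATIONS.get(tuple(phonemes), "<unknown>")

decoded = ["HH", "AH", "L", "OW"]      # e.g. output of the neural decoder
print(phonemes_to_word(decoded))       # -> "hello"
```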

Sean Metzger, a graduate student in the joint Bioengineering programme at UCSF and UC Berkeley who developed the text decoder with fellow graduate student Alex Silva, said: “The accuracy, speed, and vocabulary are crucial. It’s what gives Ann the potential, in time, to communicate almost as fast as we do, and to have much more naturalistic and normal conversations.”

The next step is for the team to create a wireless version, which wouldn’t require Ann to be physically connected to the BCI.

Co-first author of the study, David Moses, PhD, an adjunct professor in neurological surgery, explained: “Giving people like Ann the ability to freely control their own computers and phones with this technology would have profound effects on their independence and social interactions.”

Ann now aspires to become a counsellor in a physical rehabilitation facility. She wrote: “I want patients there to see me and know their lives are not over now. I want to show them that disabilities don’t need to stop us or slow us down.”

In a separate study at Stanford University, as the BBC reports, researchers have undertaken similar work with a neuroprosthesis for a patient with motor neurone disease (MND), implanting four sensors “the size of pills” into 68-year-old Pat Bennett’s brain at areas key to speech production.

Image by Geralt from Pixabay.