April 24, 2019 | By Dennis Thompson | Via U.S. News
Reading the brain waves that control a person’s vocal tract might be the best way to help return a voice to people who’ve lost their ability to speak, a new study suggests.
A brain-machine interface creates natural-sounding synthetic speech by using brain activity to control a “virtual” vocal tract — an anatomically detailed computer simulation that reflects the movements of the lips, jaw, tongue and larynx that occur as a person talks.
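To make the idea of a "virtual vocal tract" concrete, here is a minimal, purely illustrative sketch of the two-stage decoding concept the article describes: brain activity is first mapped to articulator movements, and those movements are then mapped to acoustic features. The channel counts, feature sizes, linear mappings, and all variable names below are assumptions for illustration; the actual system learned its decoders from recordings made while participants spoke, and this is not the study's implementation.

```python
# Illustrative two-stage "virtual vocal tract" decoder (assumed, simplified).
# Stage 1: neural activity -> articulator movements (lips, jaw, tongue, larynx).
# Stage 2: articulator movements -> acoustic features for a speech synthesizer.

import numpy as np

rng = np.random.default_rng(0)

N_NEURAL = 256        # e.g., electrode channels (assumed count)
N_ARTICULATORS = 32   # kinematic trajectories of the vocal tract (assumed count)
N_ACOUSTIC = 25       # acoustic features per time frame (assumed count)

# Stand-ins for trained decoder weights; a real system would learn these
# mappings from data recorded while participants read sentences aloud.
W_neural_to_kinematics = rng.standard_normal((N_ARTICULATORS, N_NEURAL)) * 0.01
W_kinematics_to_acoustics = rng.standard_normal((N_ACOUSTIC, N_ARTICULATORS)) * 0.1


def decode_frame(neural_frame: np.ndarray) -> np.ndarray:
    """Map one time step of brain activity to acoustic features via the
    intermediate articulatory ("virtual vocal tract") representation."""
    kinematics = W_neural_to_kinematics @ neural_frame    # stage 1: brain -> movements
    acoustics = W_kinematics_to_acoustics @ kinematics    # stage 2: movements -> sound
    return acoustics


# Toy usage: decode a short run of simulated neural frames.
neural_recording = rng.standard_normal((100, N_NEURAL))   # 100 time steps
synthesized = np.array([decode_frame(f) for f in neural_recording])
print(synthesized.shape)  # (100, 25) acoustic frames, ready for a vocoder
```

The point of the intermediate articulatory stage is that movements of the mouth and throat are a more natural target for the brain signals being recorded than raw sound is, which is what lets the second stage behave like a shared, reusable vocal tract.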
This interface created artificial speech that could be accurately understood up to 70% of the time, said study co-author Josh Chartier, a bioengineering graduate student at the University of California, San Francisco (UCSF) Weill Institute for Neuroscience.
The participants involved in this proof-of-concept study still had the ability to speak. They were five patients being treated at the UCSF Epilepsy Center who had electrodes temporarily implanted in their brains to map the source of their seizures, in preparation for neurosurgery.
But researchers believe the speech synthesizer ultimately could help people who’ve lost the ability to talk due to stroke, traumatic brain injury, cancer or neurodegenerative conditions like Parkinson’s disease, multiple sclerosis or amyotrophic lateral sclerosis (Lou Gehrig’s disease).
“We found that the neural code for vocal movements is partially shared across individuals,” said senior researcher Dr. Edward Chang, a professor of neurosurgery at the UCSF School of Medicine. “An artificial vocal tract modeled on one person’s voice can be adapted to synthesize speech from another person’s brain activity,” he explained.
“This means that a speech decoder that’s trained in one person with intact speech could maybe someday act as a starting point for someone who has a speech disability, who could then learn to control the simulated vocal tract using their own brain activity,” Chang said.
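Chang's "starting point" idea amounts to reusing a pre-trained virtual vocal tract and fitting only the user-specific part. The sketch below illustrates that notion under stated assumptions: the kinematics-to-acoustics stage is frozen, and a new neural-to-kinematics mapping is fit from a small calibration set using ordinary least squares. The data shapes, the least-squares fit, and the variable names are hypothetical; the study's actual training and adaptation procedure is not reproduced here.

```python
# Minimal sketch (assumed) of adapting a pre-trained virtual vocal tract to a new user.
# The shared kinematics-to-acoustics stage is kept frozen; only the new user's
# neural-to-kinematics mapping is fit from a small amount of calibration data.

import numpy as np

rng = np.random.default_rng(1)

N_NEURAL = 256
N_ARTICULATORS = 32
N_ACOUSTIC = 25

# Pretend this stage was trained on another speaker and is reused as-is.
W_kinematics_to_acoustics_frozen = rng.standard_normal((N_ACOUSTIC, N_ARTICULATORS)) * 0.1

# Hypothetical calibration set: the new user's neural activity paired with
# target articulator trajectories (500 frames, shapes assumed).
X_new_user_neural = rng.standard_normal((500, N_NEURAL))
Y_target_kinematics = rng.standard_normal((500, N_ARTICULATORS))

# Fit only the user-specific first stage (ordinary least squares for brevity).
W_user, *_ = np.linalg.lstsq(X_new_user_neural, Y_target_kinematics, rcond=None)


def synthesize(neural_frame: np.ndarray) -> np.ndarray:
    """New user's brain activity -> shared virtual vocal tract -> acoustics."""
    kinematics = neural_frame @ W_user                        # user-specific stage
    return kinematics @ W_kinematics_to_acoustics_frozen.T    # frozen shared stage


print(synthesize(rng.standard_normal(N_NEURAL)).shape)  # (25,) acoustic features
```

In this framing, the person with a speech disability would not need to retrain the whole system from scratch; they would learn, with practice and feedback, to drive the already-built simulated vocal tract with their own brain activity.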
Reading brain waves to create ‘synthetic’ speech
Current speech synthesis technology requires people to spell out their thoughts letter-by-letter using devices that track very small eye or facial muscle movements, a laborious and error-prone method, the researchers said.