UCSF technology translates brain signals into text, giving a paralyzed patient speech

By David Louie and Timothy Didion
Friday, July 16, 2021
UCSF brain technology gives a paralyzed patient speech
Researchers at UCSF have developed technology that processes brain signals into text displayed on a screen.

SAN FRANCISCO (KGO) -- You won't hear his voice clearly, but a brainstem stroke patient is essentially speaking for the first time in 15 years. Or more precisely, his brain is. It's sending the signals that would normally drive his lips and vocal cords. Instead, those signals are intercepted by researchers at UCSF through an array of electrodes implanted in his brain and displayed as text on a screen.



"The task tells him, please say this word at this time, and we take that neural activity," says David Moses, PH.D.



Dr. Moses says the electrodes sit over the part of the brain responsible for speech. Sophisticated machine learning software is able to recognize and decode the signals. So far, the team has built a vocabulary of about 50 fully formed words, enough to create complete sentences.
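As a rough illustration of what that kind of closed-vocabulary decoding involves, here is a minimal Python sketch. It is not the UCSF team's software: the number of electrode channels, the window length, the linear classifier, and the example words are all assumptions invented for this illustration.

# Illustrative sketch only -- not the UCSF pipeline. It assumes a window of
# multi-electrode neural features and a stand-in "trained" linear classifier
# over a small, fixed vocabulary (the real system uses about 50 words).
import numpy as np

VOCAB = ["hello", "thirsty", "family", "good", "outside"]   # hypothetical subset of the vocabulary
rng = np.random.default_rng(0)

n_channels, n_timesteps = 128, 200                          # hypothetical electrode count and window length
W = rng.normal(size=(len(VOCAB), n_channels * n_timesteps)) # placeholder for learned classifier weights

def decode_word(neural_window):
    """Map one window of neural activity, recorded during attempted speech, to the most likely word."""
    scores = W @ neural_window.reshape(-1)   # one score per vocabulary word
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                     # softmax over the closed vocabulary
    return VOCAB[int(np.argmax(probs))]

# Example: decode a simulated window of activity captured while the
# participant tries to say a prompted word.
window = rng.normal(size=(n_channels, n_timesteps))
print(decode_word(window))

In the real system, a language model can further string the decoded words into the most plausible sentence; the sketch above only shows the per-word classification step.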



RELATED: UCSF scientists create decoder to translate brain signals to speech



"To see it all come together in the sentence decoding demonstration, where he is prompted with a sentence and he is trying to say each word, and those words are being decoded," he explains.



The UCSF team says its approach differs from some other brain-computer interface systems, which tap into brain functions more generally associated with movement and touch. In a project ABC7 profiled earlier this summer, a research team at Stanford used that technique successfully to allow a patient to type by thinking about writing out the letters.



"We wanted to be able to translate the signals directly to words," says UCSF neurosurgeon and study director Edward Chang, M.D.



Dr. Chang says both approaches offer unique advantages and could benefit different kinds of patients, depending on their impairment. He says his team's ultimate goal is to translate the brain signals for the sounds we use to produce words, and to create natural human speech.






"Someday, you know, we want to really think about, how do we synthesize oral speech?" adds Dr. Chang. "You know the way you and I are talking right now, which is on the order of 120 to 150 words per minute."



He says a device to decode human speech in real time is probably still far in the future. But it's a breakthrough that patients who can't speak for themselves now will, hopefully, someday be able to tell us about.



The team drew on earlier research at UCSF, which recruited epilepsy patients who agreed to have their brain signals mapped while they were undergoing treatment. That work helped isolate and identify brain signals associated with speech.



Copyright © 2024 KGO-TV. All Rights Reserved.