Researchers Create A Brain Implant For Near-Real-Time Speech Synthesis
Summary
Researchers at UC Berkeley and UC San Francisco have created a speech neuroprosthesis that restores naturalistic speech in near real time.
The system captures signals from the brain's speech sensorimotor cortex via an implanted electrode array, and uses AI to turn those signals into audible speech output in as little as one second.
Because the user was unable to vocalise due to paralysis, the researchers used a text-to-speech system to generate simulated target audio for the AI to match during training.
The AI decodes speech in 80-millisecond chunks, so the neuroprosthesis supports continuous speaking rather than waiting for a whole sentence to be decoded.
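To illustrate the idea of chunked streaming decoding, here is a minimal sketch. Everything below is a simplified assumption for illustration: the real model, its feature dimensions, and the 200 Hz sampling rate are not taken from the article; only the 80-millisecond chunk size is. The `decode_chunk` function is a hypothetical stand-in for the trained neural decoder.

```python
# Sketch of streaming decoding in 80 ms chunks (hypothetical parameters).
CHUNK_MS = 80          # chunk length reported by the researchers
SAMPLE_RATE_HZ = 200   # assumed neural-feature sampling rate
CHUNK_SAMPLES = SAMPLE_RATE_HZ * CHUNK_MS // 1000  # samples per 80 ms chunk

def decode_chunk(features):
    """Stand-in for the trained decoder: maps one chunk of cortical
    features to a piece of audio (here, just the chunk's length)."""
    return len(features)  # placeholder output

def stream_decode(feature_stream):
    """Emit decoded output chunk by chunk instead of per sentence,
    so speech output can begin after the first ~80 ms of signal."""
    buffer = []
    for sample in feature_stream:
        buffer.append(sample)
        if len(buffer) == CHUNK_SAMPLES:
            yield decode_chunk(buffer)
            buffer = []

# One second of simulated features is decoded incrementally:
chunks = list(stream_decode(range(SAMPLE_RATE_HZ)))
print(len(chunks))  # 1000 ms / 80 ms -> 12 full chunks
```

The key design point is the generator: each chunk is yielded as soon as it is full, which is what distinguishes streaming decoding from the earlier whole-sentence approach.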
This is groundbreaking technology and a marked improvement on previous brain-to-speech interfaces, which were let down by high latency.