Decoding methods
Computer science
Speech recognition
Artificial neural networks
Artificial intelligence
Telecommunications
Authors
Xupeng Chen, Ran Wang, Amirhossein Khalilian-Gourtani, Leyao Yu, Patricia Dugan, Daniel Friedman, Werner Doyle, Orrin Devinsky, Yao Wang, Adeen Flinker
Identifiers
DOI: 10.1038/s42256-024-00824-8
Abstract
Decoding human speech from neural signals is essential for brain–computer interface (BCI) technologies that aim to restore speech in populations with neurological deficits. However, it remains a highly challenging task, compounded by the scarce availability of neural signals with corresponding speech, data complexity, and high dimensionality. Here we present a novel deep learning-based neural speech decoding framework that includes an ECoG decoder that translates electrocorticographic (ECoG) signals from the cortex into interpretable speech parameters and a novel differentiable speech synthesizer that maps speech parameters to spectrograms. We have developed a companion speech-to-speech auto-encoder consisting of a speech encoder and the same speech synthesizer to generate reference speech parameters to facilitate the ECoG decoder training. This framework generates natural-sounding speech and is highly reproducible across a cohort of 48 participants. Our experimental results show that our models can decode speech with high correlation, even when limited to only causal operations, which is necessary for adoption by real-time neural prostheses. Finally, we successfully decode speech in participants with either left or right hemisphere coverage, which could lead to speech prostheses in patients with deficits resulting from left hemisphere damage.
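To make the two-stage structure described in the abstract concrete, below is a minimal PyTorch sketch of the training flow: a speech-to-speech auto-encoder (speech encoder plus differentiable synthesizer) is trained first, and its inferred speech parameters then serve as reference targets for a causal ECoG decoder that shares the same synthesizer. All module names, layer choices, tensor shapes, and the number of speech parameters here are illustrative assumptions, not the authors' actual architectures or parameter set.

```python
import torch
import torch.nn as nn

class Synthesizer(nn.Module):
    """Differentiable synthesizer: per-frame speech parameters -> spectrogram.
    A stand-in MLP; the paper's synthesizer models interpretable speech parameters."""
    def __init__(self, n_params=18, n_mels=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 128), nn.ReLU(), nn.Linear(128, n_mels))

    def forward(self, p):                # p: (batch, time, n_params)
        return self.net(p)               # -> (batch, time, n_mels)

class SpeechEncoder(nn.Module):
    """Auto-encoder branch: spectrogram -> reference speech parameters."""
    def __init__(self, n_mels=80, n_params=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_mels, 128), nn.ReLU(), nn.Linear(128, n_params))

    def forward(self, spec):
        return self.net(spec)

class EcoGDecoder(nn.Module):
    """ECoG -> speech parameters. Left-only padding keeps the convolution
    causal, so each output frame depends only on past neural samples
    (the constraint needed for real-time prostheses)."""
    def __init__(self, n_channels=64, n_params=18, k=5):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, n_params, kernel_size=k)
        self.k = k

    def forward(self, ecog):             # ecog: (batch, time, n_channels)
        x = ecog.transpose(1, 2)         # -> (batch, n_channels, time)
        x = nn.functional.pad(x, (self.k - 1, 0))   # pad past side only
        return self.conv(x).transpose(1, 2)          # -> (batch, time, n_params)

synth, enc, dec = Synthesizer(), SpeechEncoder(), EcoGDecoder()
mse = nn.MSELoss()

# Stage 1: train the speech-to-speech auto-encoder so the parameter
# bottleneck suffices to reconstruct the spectrogram.
spec = torch.randn(2, 100, 80)           # dummy mel spectrogram batch
opt1 = torch.optim.Adam(list(enc.parameters()) + list(synth.parameters()))
loss1 = mse(synth(enc(spec)), spec)
opt1.zero_grad(); loss1.backward(); opt1.step()

# Stage 2: freeze the shared synthesizer and train the ECoG decoder
# against the encoder's reference parameters plus spectrogram reconstruction.
for q in synth.parameters():
    q.requires_grad_(False)
ecog = torch.randn(2, 100, 64)            # dummy ECoG recording, 64 channels
opt2 = torch.optim.Adam(dec.parameters())
with torch.no_grad():
    ref = enc(spec)                       # reference speech parameters
p = dec(ecog)
loss2 = mse(p, ref) + mse(synth(p), spec) # guidance loss + reconstruction loss
opt2.zero_grad(); loss2.backward(); opt2.step()
```

The design point the sketch illustrates is that the synthesizer is shared between both stages: because it is differentiable, the ECoG decoder receives gradients both from matching the reference parameters and from the final spectrogram reconstruction, rather than being trained on neural-to-audio pairs alone.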