Dysarthria
Aphasia
Apraxia
Speech recognition
Computer science
Task (project management)
Electroencephalography
Audiology
Set (abstract data type)
Psychology
Cognitive psychology
Medicine
Neuroscience
Economy
Management
Programming language
Authors
Gautam Krishna,Mason Carnahan,Shilpa Shamapant,Yashitha Surendranath,Saumya Jain,Arundhati Ghosh,Co Tran,José del R. Millán,Ahmed H. Tewfik
Identifier
DOI:10.1109/embc46164.2021.9629802
Abstract
In this paper, we propose a deep learning-based algorithm to improve the performance of automatic speech recognition (ASR) systems for aphasia, apraxia, and dysarthria speech by utilizing electroencephalography (EEG) features recorded synchronously with that speech. We demonstrate a decoding performance improvement of more than 50% at test time on the isolated speech recognition task, and we provide preliminary results indicating a performance improvement on the more challenging continuous speech recognition task when EEG features are utilized. The results presented in this paper are a first step toward demonstrating the possibility of utilizing non-invasive neural signals to design a real-time, robust speech prosthetic for stroke survivors recovering from aphasia, apraxia, and dysarthria. Our aphasia, apraxia, and dysarthria speech-EEG data set will be released to the public to help further advance this interesting and crucial research.
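The abstract describes augmenting an ASR model with EEG features recorded synchronously with the speech. The paper's exact architecture is not given here, so the sketch below only illustrates the general idea of frame-wise fusion of two time-aligned feature streams; the dimensions, the function name `fuse_features`, and concatenation as the fusion method are all illustrative assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical dimensions -- chosen for illustration, not taken from the paper.
T = 100          # number of time-aligned frames
SPEECH_DIM = 13  # e.g. MFCC-like acoustic features per frame
EEG_DIM = 32     # EEG-derived features per frame

def fuse_features(speech_feats: np.ndarray, eeg_feats: np.ndarray) -> np.ndarray:
    """Frame-wise concatenation of speech and EEG features.

    Assumes both modalities were recorded synchronously and share the
    same frame rate, so each row pairs one speech frame with one EEG frame.
    The fused matrix could then be fed to an ASR encoder in place of the
    speech-only features.
    """
    assert speech_feats.shape[0] == eeg_feats.shape[0], "streams must be time-aligned"
    return np.concatenate([speech_feats, eeg_feats], axis=1)

rng = np.random.default_rng(0)
speech = rng.standard_normal((T, SPEECH_DIM))
eeg = rng.standard_normal((T, EEG_DIM))
fused = fuse_features(speech, eeg)
print(fused.shape)  # (100, 45)
```

Concatenation is only one of several fusion strategies (others include attention-based or gated fusion); the key requirement shared by all of them is the synchronous, frame-aligned recording of the two modalities that the abstract emphasizes.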