Speech recognition
Robustness (evolution)
Syllable
Computer science
Noise (video)
Artificial neural network
Tracking (education)
Psychology
Artificial intelligence
Pedagogy
Biochemistry
Gene
Image (mathematics)
Chemistry
Authors
Yayue Gao, Jianfeng Zhang, Qian Wang
Abstract
In a complex auditory scene, speech comprehension involves several stages, for example segregating the target from the background, recognizing syllables, and integrating syllables into linguistic units (e.g., words). Although speech segregation is robust, as shown by invariant neural tracking of the target speech envelope, whether neural tracking of linguistic units is also robust, and how such robustness is achieved, remain unknown. To investigate these questions, we used electroencephalography to concurrently record neural responses tracking a rhythmic speech stream at its syllabic and word rates. Human participants listened to the target speech under a speech or a noise distractor at varying signal-to-noise ratios. Neural tracking at the word rate was not as robust as neural tracking at the syllabic rate: robust tracking of the target's words was observed only under the speech distractor, not under the noise distractor. Moreover, this robust word tracking correlated with successful suppression of distractor tracking. Critically, both word tracking and distractor suppression correlated with behavioural comprehension accuracy. In sum, our results suggest that robust neural tracking of higher-level linguistic units relates not only to target tracking but also to distractor suppression.
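To make the "tracking at syllabic and word rates" idea concrete, below is a minimal sketch of how spectral peaks at those rates could be quantified from an EEG trace in a frequency-tagging style analysis. The rates (4 Hz syllables, 1 Hz words), the sampling rate, and the `tracking_snr` helper are illustrative assumptions, not the authors' actual analysis pipeline or parameters.

```python
import numpy as np
from scipy.signal import welch

# Assumed, not taken from the paper: a 4 Hz syllable rate and a 1 Hz word
# rate, as in typical rhythmic-speech frequency-tagging designs.
FS = 250.0           # EEG sampling rate in Hz (assumed)
SYLLABLE_RATE = 4.0  # syllables presented at 4 Hz (assumed)
WORD_RATE = 1.0      # four-syllable words -> 1 Hz word rate (assumed)

def tracking_snr(eeg, fs, target_freq, n_neighbors=4):
    """Estimate neural tracking at `target_freq` as the ratio of spectral
    power at that frequency to the mean power of neighbouring bins.

    eeg : 1-D array, a single-channel (or channel-averaged) EEG trace.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=len(eeg))  # full-length segment
    idx = np.argmin(np.abs(freqs - target_freq))       # bin at target rate
    lo = max(idx - n_neighbors, 0)
    hi = min(idx + n_neighbors + 1, len(psd))
    neighbors = np.concatenate([psd[lo:idx], psd[idx + 1:hi]])
    return psd[idx] / neighbors.mean()

# Toy usage: synthetic "EEG" with weak 1 Hz and 4 Hz components plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / FS)
eeg = (0.2 * np.sin(2 * np.pi * WORD_RATE * t)
       + 0.3 * np.sin(2 * np.pi * SYLLABLE_RATE * t)
       + rng.normal(scale=1.0, size=t.size))

print("word-rate tracking SNR:    ", tracking_snr(eeg, FS, WORD_RATE))
print("syllable-rate tracking SNR:", tracking_snr(eeg, FS, SYLLABLE_RATE))
```

Comparing such peak-to-neighbour ratios for the target and the distractor streams across signal-to-noise ratios is one way the robustness of word-rate versus syllable-rate tracking described in the abstract could be operationalized.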