Psychology
Active listening
Perception
Neurocomputational speech processing
Auditory perception
Speech perception
Audiology
Cross-modal
Set (abstract data type)
Speech recognition
Visual perception
Communication
Cognitive psychology
Neuroscience
Computer science
Medicine
Programming language
Source
Journal: Cortex
[Elsevier]
Date: 2022-04-11
Volume/Pages: 152: 21-35
Citations: 5
Identifier
DOI: 10.1016/j.cortex.2022.03.013
Abstract
During speaking or listening, endogenous motor or exogenous visual processes have been shown to fine-tune the auditory neural processing of the incoming acoustic speech signal. To compare the impact of these cross-modal effects on auditory evoked responses, two sets of speech production and perception tasks were contrasted using EEG. In the first set, participants produced vowels in a self-paced manner while listening to their auditory feedback. Following the production task, they passively listened to the entire recorded speech sequence. In the second set, the procedure was identical except that participants also watched their own articulatory movements online. While both endogenous motor and exogenous visual processes fine-tuned auditory neural processing, these cross-modal effects acted differentially on the amplitude and latency of auditory evoked responses. A reduced amplitude was observed on auditory evoked responses during speaking compared to listening, irrespective of the auditory or audiovisual feedback. Adding orofacial visual movements to the acoustic speech signal also shortened the latency of auditory evoked responses, irrespective of the perception or production task. Taken together, these results suggest distinct motor and visual influences on auditory neural processing, possibly through different neural gating and predictive mechanisms.
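The abstract contrasts conditions by the amplitude and latency of auditory evoked responses. As a purely illustrative sketch (not the authors' analysis pipeline), the snippet below shows one common way such measures are extracted: find the peak of a component (e.g., a negative N1-like deflection) in a post-stimulus window of an averaged EEG trace and compare conditions. The sampling rate, baseline length, time window, and the synthetic "listening" and "speaking" traces are all assumptions for demonstration.

```python
# Hypothetical sketch: peak amplitude/latency of an auditory evoked response
# from averaged, single-channel EEG traces. Not the authors' method.
import numpy as np

SFREQ = 500.0           # assumed sampling rate (Hz)
BASELINE_SAMPLES = 100  # assumed 200 ms pre-stimulus baseline

def peak_amplitude_latency(evoked, t_min=0.08, t_max=0.16):
    """Return (amplitude, latency_s) of the most negative deflection
    in the t_min..t_max post-stimulus window of a 1-D evoked trace."""
    times = (np.arange(evoked.size) - BASELINE_SAMPLES) / SFREQ
    window = (times >= t_min) & (times <= t_max)
    idx_in_window = np.argmin(evoked[window])      # N1 is a negative peak
    peak_idx = np.flatnonzero(window)[idx_in_window]
    return evoked[peak_idx], times[peak_idx]

# Synthetic averaged traces for two conditions (illustration only):
# the "speaking" trace carries a smaller N1, mimicking speech-induced suppression.
rng = np.random.default_rng(0)
listen = rng.normal(0, 0.2, 400); listen[140:160] -= 2.0
speak  = rng.normal(0, 0.2, 400); speak[140:160]  -= 1.2

for name, trace in [("listening", listen), ("speaking", speak)]:
    amp, lat = peak_amplitude_latency(trace)
    print(f"{name}: peak amplitude {amp:.2f} uV at {lat * 1000:.0f} ms")
```

Comparing the printed values across conditions mirrors, at a toy scale, the kind of amplitude (speaking vs. listening) and latency (auditory vs. audiovisual) contrasts described in the abstract.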