Psychology
Autism
Multisensory integration
Audiology
Speech perception
Perception
Typically developing
Cognition
Cognitive psychology
Developmental psychology
Medicine
Neuroscience
Authors
Shuyuan Feng, Haoyang Lu, Qiandong Wang, Tianbi Li, Jing Fang, Lihan Chen, Li Yi
Abstract
Autistic children show audiovisual speech integration deficits, though the underlying mechanisms remain unclear. The present study examined how audiovisual speech integration deficits in autistic children could be affected by their looking patterns. We measured audiovisual speech integration in 26 autistic children and 26 typically developing (TD) children (4‐ to 7‐year‐olds) using the McGurk task (a videotaped speaker uttering phonemes with her eyes open or closed) and tracked their eye movements. We found that, compared with TD children, autistic children showed weaker audiovisual speech integration (i.e., the McGurk effect) in the open‐eyes condition and similar audiovisual speech integration in the closed‐eyes condition. Autistic children viewed the speaker's mouth less in non‐McGurk trials than in McGurk trials in both conditions. Importantly, autistic children's weaker audiovisual speech integration could be predicted by their reduced mouth‐looking time. The present study indicates that atypical face‐viewing patterns could serve as one of the cognitive mechanisms underlying audiovisual speech integration deficits in autistic children.

Lay Summary
The McGurk effect occurs when the visual component of one phoneme (e.g., “ga”) and the auditory component of another (e.g., “ba”) uttered by a speaker are integrated into a fused percept (e.g., “da”). The present study examined how the McGurk effect in autistic children could be affected by their looking patterns toward the speaker's face. We found that reduced looking time at the speaker's mouth in autistic children predicted a weaker McGurk effect. Because the McGurk effect reflects audiovisual speech integration, our findings imply that future interventions could improve audiovisual speech integration in autistic children by directing them to look at the speaker's mouth.