Facial expression
Psychology
Speech recognition
Expression (computer science)
Disgust
Signal (programming language)
Audiology
Cognitive psychology
Communication
Computer science
Medicine
Psychiatry
Programming language
Anger
Authors
Chloé Stoll, Helen Rodger, Junpeng Lao, Anne-Raphaëlle Richoz, Olivier Pascalis, Matthew Dye, Roberto Caldara
Identifier
DOI: 10.1093/deafed/enz023
Abstract
We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for the intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.
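As a rough illustration of the two stimulus manipulations named in the abstract (expression-intensity morphs and noise-to-signal blends), the sketch below shows how such continua could be built by simple linear interpolation. This is not the authors' actual stimulus pipeline; the function names and the synthetic input arrays are hypothetical.

```python
import numpy as np


def morph_sequence(neutral, expressive, n_steps=10):
    """Linearly interpolate between a neutral and an expressive face image.

    Both inputs are float arrays of identical shape with values in [0, 1].
    Returns a list of images ranging from 0% to 100% expression intensity.
    """
    weights = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - w) * neutral + w * expressive for w in weights]


def noise_to_signal(image, signal_level, rng=None):
    """Blend an image with uniform noise.

    signal_level = 0.0 yields pure noise; signal_level = 1.0 yields the
    unaltered image.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(0.0, 1.0, size=image.shape)
    return signal_level * image + (1.0 - signal_level) * noise


if __name__ == "__main__":
    # Synthetic stand-ins for aligned grayscale face photographs.
    neutral = np.zeros((64, 64))
    expressive = np.ones((64, 64))

    frames = morph_sequence(neutral, expressive, n_steps=5)
    degraded = noise_to_signal(frames[-1], signal_level=0.3)
    print(len(frames), degraded.shape)
```

In a psychophysical task of this kind, the intensity (morph step) or signal level at which an observer reliably names the expression serves as the recognition threshold that is then compared across groups.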