Categories
Observer (physics)
Facial expression
Artificial intelligence
Computer science
Psychology
Pattern recognition (psychology)
Cognitive psychology
Natural language processing
Physics
Quantum mechanics
Authors
Martin Wegrzyn, Laura Münst, Jessica König, Maximilian Dinter, Johanna Kißler
Identifiers
DOI:10.1016/j.actpsy.2024.104569
Abstract
According to one prominent model, facial expressions of emotion can be categorized as depicting happiness, disgust, anger, sadness, fear and surprise. One open question is which facial features observers use to recognize the different expressions, and whether the features indicated by observers can be used to predict which expression they saw. We created fine-grained maps of diagnostic facial features by asking participants to use mouse clicks to highlight those parts of a face that they deem useful for recognizing its expression. We tested how well the resulting maps align with models of emotion expressions (based on Action Units) and how the maps relate to the accuracy with which observers recognize full or partly masked faces. As expected, observers focused on the eye and mouth regions in all faces. However, each expression deviated from this global pattern in a unique way, allowing us to create maps of diagnostic face regions. Action Units considered most important for expressing an emotion were highlighted most often, indicating their psychological validity. The maps of facial features also allowed us to correctly predict which expression a participant had seen, with above-chance accuracies for all expressions. For happiness, fear and anger, the face half that was highlighted the most was also the half whose visibility led to higher recognition accuracies. The results suggest that diagnostic facial features are distributed in unique patterns for each expression, which observers seem to intuitively extract and use when categorizing facial displays of emotion.