Facial Action Coding System
Facial expression
Sadness
Disgust
Similarity (geometry)
Happiness
Face (sociological concept)
Psychology
Expression (computer science)
Cognitive psychology
Emotional expression
Emotion classification
Identification (biology)
Computer science
Artificial intelligence
Communication
Anger
Social psychology
Image (mathematics)
Biology
Sociology
Programming language
Plant
Social science
Authors
Martin Wegrzyn, Maria Vogt, Berna Kireclioglu, Julia Schneider, Johanna Kissler
Source
Journal: PLOS ONE
[Public Library of Science]
Date: 2017-05-11
Volume/Issue: 12 (5): e0177239
Citations: 172
Identifier
DOI:10.1371/journal.pone.0177239
Abstract
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, at a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing the importance of different face areas to be visualized for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the Facial Action Coding System. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eye or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion by delineating the mapping from facial features to psychological representations.
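The abstract describes computing, for each of the 48 tiles, its contribution to successful recognition, but does not spell out the metric. A minimal sketch of one plausible operationalization (an assumption, not the authors' actual analysis): score each tile by how much more often it was uncovered on correctly labeled trials than on incorrect ones.

```python
import numpy as np

def tile_contributions(uncovered_masks, correct):
    """Estimate each tile's diagnostic value for expression recognition.

    uncovered_masks: (n_trials, n_tiles) boolean array; True where a tile
        was visible at the moment the participant stopped the sequence.
    correct: (n_trials,) boolean array; True for correctly labeled trials.

    Returns a length-n_tiles array: the proportion of correct trials in
    which each tile was visible, minus the same proportion for incorrect
    trials. Positive values mark tiles associated with success.
    """
    uncovered_masks = np.asarray(uncovered_masks, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    p_correct = uncovered_masks[correct].mean(axis=0)
    p_incorrect = uncovered_masks[~correct].mean(axis=0)
    return p_correct - p_incorrect
```

With per-expression contribution maps like this, the similarity analysis mentioned in the abstract could then compare faces by correlating their tile-score vectors rather than their raw pixels.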