Facial expression
Psychology
Emotional expression
Speech recognition
Computer science
Cognitive psychology
Artificial intelligence
Pattern recognition (psychology)
Authors
A. Miolla, M. Cardaioli, C. Scarpazza
Identifier
DOI: 10.3758/s13428-022-01914-4
Abstract
Facial expressions are among the most powerful signals for human beings to convey their emotional states. Indeed, emotional facial datasets represent the most effective and controlled method of examining humans' interpretation of and reaction to various emotions. However, scientific research on emotion has mainly relied on static pictures of facial expressions posed (i.e., simulated) by actors, creating a significant bias in the emotion literature. This dataset tries to fill this gap, providing a considerable number (N = 1458) of dynamic genuine (N = 707) and posed (N = 751) clips of the six universal emotions from 56 participants. The dataset is available in two versions: original clips, including participants' body and background, and modified clips, where only the face of participants is visible. Notably, the original dataset has been validated by 122 human raters, while the modified dataset has been validated by 280 human raters. Hit rates for emotion and genuineness, as well as the mean and standard deviation of genuineness and intensity perception, are provided for each clip to allow future users to select the most appropriate clips needed to answer their scientific questions.
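Since the per-clip validation statistics are intended for clip selection, a minimal sketch of such a selection step is shown below. The file name clip_metadata.csv and the column names (clip_id, condition, emotion, emotion_hit_rate, genuineness_mean, intensity_mean) are assumptions for illustration, not the dataset's documented schema; the actual field names should be taken from the published dataset files.

```python
# Hypothetical sketch: selecting clips by their validation statistics.
# The CSV file name and column names below are assumptions, not the
# dataset's documented schema.
import pandas as pd

# Load the assumed per-clip metadata table.
clips = pd.read_csv("clip_metadata.csv")

# Example selection: genuine "happiness" clips that raters recognized
# reliably (emotion hit rate >= 0.80) and rated as genuine on average.
selected = clips[
    (clips["condition"] == "genuine")
    & (clips["emotion"] == "happiness")
    & (clips["emotion_hit_rate"] >= 0.80)
    & (clips["genuineness_mean"] >= 0.50)
]

# Inspect the chosen clips and their intensity ratings.
print(selected[["clip_id", "emotion_hit_rate", "intensity_mean"]])
```

Thresholds such as 0.80 and 0.50 are placeholders; the appropriate cutoffs depend on the specific research question and the distribution of ratings in the dataset.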