Authors
Wei Jie, Guanyu Hu, Xinyu Yang, Luu Anh Tuan, Yizhuo Dong
Identifier
DOI: 10.1016/j.eswa.2023.121419
Abstract
Recent research has shown that facial expressions and body gestures are two significant cues for identifying human emotions. However, existing studies mainly focus on contextual information from adjacent frames and rarely explore the spatio-temporal relationships between distant or global frames. In this paper, we revisit facial expression and body gesture emotion recognition and propose to improve video emotion recognition by extracting spatio-temporal features through further encoding of temporal information. Specifically, for facial expressions, we propose a super image-based spatio-temporal convolutional model (SISTCM) and a two-stream LSTM model to capture local spatio-temporal features and learn global temporal cues of emotion changes. For body gestures, a novel representation method and an attention-based channel-wise convolutional model (ACCM) are introduced to learn key joint features and the independent characteristics of each joint. Extensive experiments on five common datasets demonstrate the superiority of the proposed method, and the results show that jointly learning the two types of visual information leads to significant improvements over existing state-of-the-art methods.
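To illustrate the general idea behind an attention-based channel-wise convolution over skeleton joints, the following is a minimal PyTorch sketch. It is not the paper's ACCM: the class name, joint/coordinate layout, hidden sizes, and the squeeze-and-excitation style attention are all assumptions made for illustration. The sketch keeps each joint in its own channel group (grouped 1D convolution) so that per-joint temporal features are learned independently, then reweights joints with learned attention scores.

```python
import torch
import torch.nn as nn

class ChannelWiseJointConv(nn.Module):
    """Illustrative attention-based channel-wise convolution over skeleton joints.

    Input:  (batch, joints * coords, frames), with each joint's coordinate
            sequence kept in its own channel group.
    Output: (batch, joints * hidden, frames), with per-joint attention applied.
    Hyperparameters below are placeholders, not values from the paper.
    """

    def __init__(self, num_joints=25, coords=3, hidden=8, kernel_size=9):
        super().__init__()
        self.num_joints = num_joints
        # Grouped 1D convolution: one filter bank per joint, so each joint's
        # temporal features are learned independently (channel-wise).
        self.joint_conv = nn.Conv1d(
            in_channels=num_joints * coords,
            out_channels=num_joints * hidden,
            kernel_size=kernel_size,
            padding=kernel_size // 2,
            groups=num_joints,
        )
        # Squeeze-and-excitation style attention producing one weight per joint.
        self.attn = nn.Sequential(
            nn.Linear(num_joints * hidden, num_joints),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, joints * coords, frames)
        feats = torch.relu(self.joint_conv(x))      # (B, J*H, T)
        pooled = feats.mean(dim=-1)                 # (B, J*H) global temporal pooling
        weights = self.attn(pooled)                 # (B, J) one weight per joint
        b, jh, t = feats.shape
        h = jh // self.num_joints
        feats = feats.view(b, self.num_joints, h, t)
        feats = feats * weights.view(b, self.num_joints, 1, 1)  # emphasize key joints
        return feats.view(b, jh, t)


if __name__ == "__main__":
    # Toy usage: 4 clips, 25 joints with (x, y, z) coordinates, 64 frames.
    skeleton = torch.randn(4, 25 * 3, 64)
    out = ChannelWiseJointConv()(skeleton)
    print(out.shape)  # torch.Size([4, 200, 64])
```

The grouped convolution is what makes the block "channel-wise": filters never mix channels belonging to different joints, while the attention branch decides which joints matter most for the emotion at hand.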