Keywords
Pooling
Artificial intelligence
Discriminative model
Computer science
Convolutional neural network
Transformer
Boosting (machine learning)
Pattern recognition (psychology)
Token
Machine learning
Computation
Facial expression
Speech recognition
Engineering
Electrical engineering
Computer security
Voltage
Algorithm
Authors
Fanglei Xue,Qiangchang Wang,Zichang Tan,Zhongsong Ma,Guodong Guo
Source
Journal: IEEE Transactions on Affective Computing
Publisher: Institute of Electrical and Electronics Engineers
Date: 2022-12-05
Volume/Issue: 14 (4): 3244-3256
Citations: 55
Identifier
DOI:10.1109/taffc.2022.3226473
Abstract
Facial Expression Recognition (FER) in the wild is an extremely challenging task. Recently, some Vision Transformers (ViT) have been explored for FER, but most of them perform worse than Convolutional Neural Networks (CNN). This is mainly because the newly proposed modules are difficult to train to convergence from scratch, owing to a lack of inductive bias, and tend to focus on occluded and noisy areas. TransFER, a representative transformer-based method for FER, alleviates this with multi-branch attention dropping but incurs excessive computation. In contrast, we present two attentive pooling (AP) modules that pool noisy features directly: Attentive Patch Pooling (APP) and Attentive Token Pooling (ATP). They guide the model to emphasize the most discriminative features while reducing the impact of less relevant ones. APP selects the most informative patches on CNN features, and ATP discards unimportant tokens in the ViT. Being simple to implement and free of learnable parameters, APP and ATP reduce the computational cost while boosting performance by pursuing only the most discriminative features. Qualitative results demonstrate the motivation and effectiveness of our attentive poolings, and quantitative results on six in-the-wild datasets outperform other state-of-the-art methods.
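The token-pooling idea described in the abstract can be illustrated as a parameter-free top-k selection over per-token attention scores. The sketch below is a hypothetical NumPy illustration under assumed names (`attentive_token_pooling`, a `keep_ratio` parameter, and attention scores supplied externally, e.g. CLS-to-token attention); it is not the authors' exact ATP procedure.

```python
import numpy as np

def attentive_token_pooling(tokens: np.ndarray,
                            scores: np.ndarray,
                            keep_ratio: float = 0.5) -> np.ndarray:
    """Keep only the highest-scoring tokens (parameter-free top-k selection).

    tokens : (N, D) array of ViT token features
    scores : (N,) per-token attention scores (source of scores is an assumption)
    """
    k = max(1, int(len(tokens) * keep_ratio))
    keep = np.argsort(scores)[-k:]   # indices of the k most attended tokens
    keep.sort()                      # preserve the original token order
    return tokens[keep]

# Usage: 8 tokens of dimension 4; half are kept, so downstream layers
# process only the 4 most attended tokens.
tokens = np.arange(32, dtype=float).reshape(8, 4)
scores = np.array([0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.6])
pooled = attentive_token_pooling(tokens, scores, keep_ratio=0.5)
```

Because the selection has no learnable parameters, the only design choice is where the scores come from and how many tokens to keep, which matches the abstract's claim of reduced computation with no added capacity.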