Discriminant
Computer science
Artificial intelligence
Classifier (UML)
Expression (computer science)
Representation (politics)
Pattern recognition (psychology)
Similarity (geometry)
Facial expression
Natural language processing
Machine learning
Image (mathematics)
Politics
Political science
Law
Programming language
Authors
Ming Li, Huazhu Fu, Shengfeng He, Hehe Fan, Jun Liu, Jussi Keppo, Mike Zheng Shou
Identifier
DOI: 10.1109/tmm.2023.3347849
Abstract
Learning discriminative and robust representations is important for facial expression recognition (FER) due to subtly different emotional faces and their subjective annotations. Previous works usually address one representation alone because these two goals seem contradictory to optimize jointly, so their performance inevitably suffers from challenges posed by the neglected representation. In this article, by considering this problem from two novel perspectives, we demonstrate that discriminative and robust representations can be learned in a unified approach, i.e., DR-FER, and mutually benefit each other. Moreover, we achieve this with supervision from only the original annotations. Specifically, to learn discriminative representations, we propose performing masked image modeling (MIM) as an auxiliary task to force our network to discover expression-related facial areas. This is the first attempt to employ MIM to explore discriminative patterns in a self-supervised manner. To extract robust representations, we present a category-aware self-paced learning schedule to mine high-quality annotated (easy) expressions and incorrectly annotated (hard) counterparts. We further introduce a retrieval similarity-based relabeling strategy to correct hard expression annotations, exploiting them more effectively. With the enhanced discrimination ability of the FER classifier acting as a bridge, these two learning goals significantly strengthen each other. Extensive experiments on several popular benchmarks demonstrate the superior performance of our DR-FER. Moreover, thorough visualizations and extra experiments on datasets with manually corrupted annotations show that our approach successfully learns both discriminative and robust representations simultaneously.
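The category-aware self-paced selection and similarity-based relabeling described in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: per class, the lowest-loss samples are treated as easy (cleanly annotated), and the remaining hard samples are relabeled by cosine similarity to class prototypes computed from the easy samples (a stand-in for the paper's retrieval-based relabeling; the function name, keep ratio, and prototype scheme are all assumptions).

```python
import numpy as np

def select_and_relabel(features, labels, losses, keep_ratio=0.7):
    """Hypothetical sketch of category-aware self-paced mining.

    Per class, keep the lowest-loss ("easy") samples; relabel the rest
    ("hard") by nearest class prototype built from easy samples only.
    """
    labels = labels.copy()
    easy_mask = np.zeros(len(labels), dtype=bool)
    # Category-aware selection: pick easy samples within each class,
    # so majority classes cannot crowd out minority ones.
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        n_keep = max(1, int(len(idx) * keep_ratio))
        easy_mask[idx[np.argsort(losses[idx])[:n_keep]]] = True
    # Class prototypes averaged over easy samples only.
    classes = np.unique(labels[easy_mask])
    protos = np.stack([features[easy_mask & (labels == c)].mean(axis=0)
                       for c in classes])
    # Relabel hard samples by cosine similarity to the prototypes.
    hard = ~easy_mask
    if hard.any():
        f = features[hard]
        f = f / np.linalg.norm(f, axis=1, keepdims=True)
        p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
        labels[hard] = classes[np.argmax(f @ p.T, axis=1)]
    return labels, easy_mask
```

In this toy form, a sample whose annotation disagrees with its feature-space neighborhood incurs a high loss, is flagged as hard, and is reassigned to the class whose prototype it most resembles, so hard samples are exploited rather than discarded.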