Keywords
Discriminative; computer science; artificial intelligence; pattern recognition; generative adversarial network; deep learning; fusion; feature; representation; facial expression
Authors
Zhe Sun, Hehao Zhang, Jiatong Bai, Mingyang Liu, Zhengping Hu
Identifier
DOI: 10.1016/j.patcog.2022.109157
Highlights
• A discriminatively deep fusion (DDF) approach based on an improved conditional generative adversarial network (im-cGAN) is proposed for facial expression recognition.
• The proposed im-cGAN model can generate additional labeled samples using only images annotated with a partial set of action units.
• The approach obtains discriminative representations by fusing global features from the generated images with local features from regional patches.
• The designed D-loss function simultaneously expands the inter-class distance and reduces the intra-class distance.

Abstract
Since most deep learning-based methods depend heavily on large amounts of labeled data, extracting discriminative features from training samples with limited labels remains a challenging issue in facial expression recognition. Given the above, we propose a discriminatively deep fusion (DDF) approach based on an improved conditional generative adversarial network (im-cGAN) to learn abstract representations of facial expressions. First, we train the im-cGAN on facial images annotated with action units (AUs) to generate additional labeled expression samples. Subsequently, we fuse the global features learned by the global-based module with the local features learned by the region-based module to obtain the fused feature representation. Finally, we design a discriminative loss function (D-loss) that expands the inter-class variations while minimizing the intra-class distances to enhance the discriminability of the fused features. Experimental results on the JAFFE, CK+, Oulu-CASIA, and KDEF datasets demonstrate that the proposed approach outperforms several state-of-the-art methods.
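The abstract describes fusing global features learned from whole-face images with local features learned from regional patches before classification. The following is a minimal PyTorch sketch of that idea only; the branch depths, feature dimension, and plain concatenation are illustrative assumptions, not the authors' exact global-based and region-based modules.

```python
import torch
import torch.nn as nn

class GlobalLocalFusionSketch(nn.Module):
    """Sketch: fuse a global (whole-face) feature with a local (patch) feature."""

    def __init__(self, num_classes=7, feat_dim=256):
        super().__init__()
        # Global branch: encodes the whole face image.
        self.global_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Local branch: encodes a cropped regional patch (e.g. eyes or mouth).
        self.local_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Classifier on the concatenated (fused) representation.
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, face, patch):
        g = self.global_branch(face)        # global features
        l = self.local_branch(patch)        # local features
        fused = torch.cat([g, l], dim=1)    # fused feature representation
        return self.classifier(fused), fused
```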
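The D-loss is described as expanding inter-class variations while minimizing intra-class distances. The sketch below shows one common way to realize such an objective: a learnable class-center term for intra-class compactness plus a margin-based penalty between centers for inter-class separation. The center-based formulation, the margin, and the weight lam are assumptions for illustration; the paper defines the actual D-loss. In practice a term like this is usually added to a standard cross-entropy classification loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DLossSketch(nn.Module):
    """Illustrative discriminative loss: intra-class compactness + inter-class separation."""

    def __init__(self, num_classes=7, feat_dim=512, margin=1.0, lam=0.5):
        super().__init__()
        # One learnable center per expression class (hypothetical formulation).
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin
        self.lam = lam

    def forward(self, features, labels):
        # Intra-class term: pull each fused feature toward its own class center.
        intra = ((features - self.centers[labels]) ** 2).sum(dim=1).mean()
        # Inter-class term: push distinct class centers apart until they exceed the margin.
        dist = torch.cdist(self.centers, self.centers, p=2)
        off_diag = ~torch.eye(self.centers.size(0), dtype=torch.bool, device=dist.device)
        inter = F.relu(self.margin - dist[off_diag]).mean()
        return intra + self.lam * inter


# Typical usage alongside cross-entropy on the classifier logits:
#   logits, fused = model(face, patch)
#   loss = F.cross_entropy(logits, labels) + d_loss(fused, labels)
```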