Softmax function
Convolutional neural network
Computer science
Artificial intelligence
Classifier (UML)
Pattern recognition (psychology)
Network architecture
Exploit
Face (sociological concept)
Deep learning
Feature (linguistics)
Facial recognition system
Computer vision
Social science
Computer security
Sociology
Linguistics
Philosophy
Authors
Ali Raza Shahid, Hong Yan
Identifier
DOI: 10.1016/j.knosys.2023.110451
Abstract
Facial expression recognition (FER) using a deep convolutional neural network (DCNN) is important and challenging. Although substantial effort has been made to increase FER accuracy through DCNNs, previous studies are still not sufficiently generalisable for real-world applications. Traditional FER studies are mainly limited to controlled, lab-posed frontal facial images, which lack the challenges of motion blur, head poses, occlusions, face deformations and lighting under uncontrolled conditions. In this work, we proposed a SqueezExpNet architecture that can take advantage of local and global facial information for a highly accurate FER system that can handle environmental variations. Our network was divided into two stages: a geometrical attention stage with a SqueezeNet-like architecture to obtain local highlight information, and a spatial texture stage comprising several squeeze-and-expand layers to exploit high-level global features. In particular, we created a weighted mask of 3D face landmarks and applied element-wise multiplication with the spatial feature in the first stage to draw attention to important local facial regions. Next, we input the face spatial image and its augmentations into the second stage of the network. Finally, as the classifier, a recurrent neural network was designed to fuse the highlighted information from the two stages rather than simply using the SoftMax function, thereby helping to overcome uncertainties. Experiments covering basic and compound FER tasks were performed using three leading facial expression datasets. Our strategy outperformed existing DCNN methods and achieved state-of-the-art results. The developed architecture, adopted research methodology and reported findings may find applications in real-time FER for surveillance, health and feedback systems.
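To make the dual-stage design described in the abstract concrete, below is a minimal PyTorch-style sketch of the general idea: a local branch whose input is modulated by a weighted landmark mask via element-wise multiplication, a global branch built from SqueezeNet-style squeeze-and-expand (Fire) blocks, and a recurrent classifier that fuses the two feature vectors instead of a plain SoftMax head. The layer widths, module names, seven-class output and 2D mask here are illustrative assumptions, not the authors' published SqueezExpNet implementation.

```python
# Illustrative sketch only; hyper-parameters and structure are assumptions.
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet-style squeeze-and-expand block (assumed building block)."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

class TwoStageFER(nn.Module):
    """Dual-stage FER sketch: landmark-weighted local attention plus
    global squeeze/expand features, fused by a recurrent classifier."""
    def __init__(self, num_classes=7):
        super().__init__()
        # Stage 1: geometrical attention branch (local features).
        self.local_branch = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            Fire(64, 16, 64),                      # 128 output channels
            nn.AdaptiveAvgPool2d(1),
        )
        # Stage 2: spatial texture branch (global features).
        self.global_branch = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            Fire(64, 16, 64),
            Fire(128, 32, 128),                    # 256 output channels
            nn.AdaptiveAvgPool2d(1),
        )
        # Recurrent classifier fuses the two feature vectors as a short
        # sequence instead of feeding a single SoftMax head directly.
        self.project_global = nn.Linear(256, 128)
        self.rnn = nn.GRU(input_size=128, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, face, landmark_mask):
        # Element-wise multiplication with a weighted landmark mask draws
        # attention to salient facial regions (eyes, brows, mouth).
        local_feat = self.local_branch(face * landmark_mask).flatten(1)    # (B, 128)
        global_feat = self.project_global(
            self.global_branch(face).flatten(1))                           # (B, 128)
        seq = torch.stack([local_feat, global_feat], dim=1)                # (B, 2, 128)
        _, h = self.rnn(seq)
        return self.fc(h[-1])                                              # class logits

# Example usage with random tensors (224x224 face, same-size landmark mask).
model = TwoStageFER(num_classes=7)
face = torch.randn(2, 3, 224, 224)
mask = torch.rand(2, 1, 224, 224)   # broadcasts over the 3 colour channels
logits = model(face, mask)
print(logits.shape)                  # torch.Size([2, 7])
```

In this sketch the GRU simply reads the local and global feature vectors as a two-step sequence; the paper's classifier and its handling of augmented inputs may differ.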