Positioning
Computer science
Benchmark (surveying)
Graph
Artificial intelligence
Coding (set theory)
Baseline (sea)
Convolutional neural network
Source code
Pattern recognition (psychology)
Theoretical computer science
Machine learning
Programming language
Oceanography
Geology
Set (abstract data type)
Geography
Geodesy
Authors
Shukang Yin,Shiwei Wu,Tong Xu,Shifeng Liu,Sirui Zhao,Enhong Chen
Identifiers
DOI: 10.1109/icme55011.2023.00047
Abstract
Automatic Micro-Expression (ME) spotting in long videos is a crucial step in ME analysis, but it is also a challenging task due to the short duration and low intensity of MEs. Previous works on this problem generally fail to consider the structure of the human face and the correspondence between expressions and the relevant facial muscles. To address this issue and improve ME spotting performance, this paper extracts finer spatial features by modeling the relationships between facial Regions of Interest (ROIs). Specifically, we propose a graph convolutional network called the Action-Unit-aWare Graph Convolutional Network (AUW-GCN). In addition, to incorporate prior knowledge and address the issue of small datasets, AU-related statistics are encoded into the network. Comprehensive experiments show that our method consistently outperforms baseline methods and achieves new SOTA performance on two benchmark datasets, CAS(ME)² and SAMM-LV. Our code is available at https://github.com/xjtupanda/AUW-GCN.
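To illustrate the core idea the abstract describes (a graph convolution over per-ROI facial features, with an adjacency that could encode AU-related prior statistics), here is a minimal PyTorch sketch. It is not the authors' AUW-GCN implementation (see their repository for that); the class name `ROIGraphConv`, the learnable adjacency placeholder, and all dimensions are illustrative assumptions.

```python
# Minimal sketch of a graph convolution over facial ROI node features.
# Not the authors' code; names and shapes are assumptions for illustration.
import torch
import torch.nn as nn


class ROIGraphConv(nn.Module):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""

    def __init__(self, in_dim: int, out_dim: int, num_rois: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        # Adjacency over ROIs; in AUW-GCN this is where AU-related statistics
        # (prior knowledge) would enter. Here it is just a learnable placeholder.
        self.adj = nn.Parameter(torch.eye(num_rois))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_rois, in_dim) -- one feature vector per facial ROI
        a = self.adj + torch.eye(self.adj.size(0), device=x.device)  # add self-loops
        deg = a.sum(dim=-1)
        d_inv_sqrt = torch.diag(deg.clamp(min=1e-6).pow(-0.5))
        a_norm = d_inv_sqrt @ a @ d_inv_sqrt                          # symmetric normalization
        return torch.relu(a_norm @ self.weight(x))                    # propagate over the ROI graph


if __name__ == "__main__":
    layer = ROIGraphConv(in_dim=64, out_dim=32, num_rois=12)
    feats = torch.randn(2, 12, 64)   # batch of 2 clips, 12 facial ROIs (illustrative)
    print(layer(feats).shape)        # torch.Size([2, 12, 32])
```

In the paper's setting such layers would sit inside a larger spotting network; this sketch only shows how ROI features can be propagated over a face graph whose edges reflect prior AU relationships.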