Computer science
Gaze
Estimation
Artificial intelligence
Domain (mathematical analysis)
Machine learning
Computer vision
Mathematics
Mathematical analysis
Economics
Management
Authors
Sihui Zhang, Yi Tian, Yilei Zhang, Mei Tian, Yaping Huang
Identifier
DOI:10.1109/tmm.2024.3358948
Abstract
Unsupervised domain-adaptive (UDA) gaze estimation aims to predict the gaze directions of unlabeled target face or eye images given a set of annotated source images, and is widely used in practical applications. However, existing methods still perform poorly due to two major challenges: 1) large personal differences and style discrepancies exist between source and target samples, which easily causes the learned source model to collapse to biased results; and 2) data uncertainties inherent in reference samples degrade the generalization ability of the model. To tackle these challenges, we propose a novel Domain-Consistent and Uncertainty-Aware (DCUA) network for generalizable gaze estimation. The DCUA network employs a two-phase framework in which a primary training sub-network (PTNet) and a refined adaptation sub-network (RANet) are trained on the source and target domains, respectively. First, to obtain robust and purely gaze-related features, we propose two domain-consistent constraints, namely an intra-domain consistent constraint and an inter-domain consistent constraint. These constraints eliminate the impact of gaze-irrelevant factors by maintaining consistency between the label and feature spaces. Second, to further improve the adaptability of our model, we propose dual uncertainty perception modules, consisting of an intrinsic uncertainty module and an extrinsic uncertainty module. These modules help the DCUA network identify inferior reference samples and avoid overfitting to them. Experiments on four cross-domain gaze estimation tasks demonstrate the effectiveness of our method.
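The abstract does not include code, so the following is only a minimal, hypothetical sketch of how consistency constraints and uncertainty weighting of the kind described above are commonly expressed as training losses; the function names, feature shapes, and specific loss forms (cosine-based consistency, heteroscedastic-style uncertainty weighting) are assumptions for illustration, not the authors' actual DCUA implementation.

```python
# Hypothetical sketch (not the authors' released code): one way to phrase
# intra-/inter-domain consistency losses and an uncertainty-weighted gaze loss.
import torch
import torch.nn.functional as F

def gaze_regression_loss(pred, target):
    # L1 loss on predicted (yaw, pitch) angles, a common choice in gaze estimation.
    return F.l1_loss(pred, target)

def consistency_loss(feat_a, feat_b):
    # Encourage two feature embeddings to agree (cosine distance).
    feat_a = F.normalize(feat_a, dim=-1)
    feat_b = F.normalize(feat_b, dim=-1)
    return (1.0 - (feat_a * feat_b).sum(dim=-1)).mean()

def uncertainty_weighted_loss(pred, target, log_var):
    # Heteroscedastic-style weighting: samples with high predicted variance
    # contribute less to the regression term; log_var acts as a regularizer.
    se = (pred - target).pow(2).sum(dim=-1)
    return (torch.exp(-log_var) * se + log_var).mean()

if __name__ == "__main__":
    # Toy usage with random tensors standing in for network outputs.
    src_feat = torch.randn(8, 128)       # source-domain features
    src_feat_aug = torch.randn(8, 128)   # features of augmented source images
    tgt_feat = torch.randn(8, 128)       # target-domain features
    pred_gaze = torch.randn(8, 2)        # predicted (yaw, pitch)
    true_gaze = torch.randn(8, 2)
    log_var = torch.randn(8)             # per-sample uncertainty estimate

    loss = (gaze_regression_loss(pred_gaze, true_gaze)
            + consistency_loss(src_feat, src_feat_aug)   # intra-domain consistency
            + consistency_loss(src_feat, tgt_feat)       # inter-domain consistency
            + uncertainty_weighted_loss(pred_gaze, true_gaze, log_var))
    print(float(loss))
```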