Domain-Consistent and Uncertainty-Aware Network for Generalizable Gaze Estimation

Authors
Sihui Zhang, Yi Tian, Yilei Zhang, Mei Tian, Yaping Huang
Source
Journal: IEEE Transactions on Multimedia (Institute of Electrical and Electronics Engineers)
Volume/pages: 26: 6996-7011
Identifier
DOI: 10.1109/tmm.2024.3358948
Abstract

Unsupervised domain adaptive (UDA) gaze estimation aims to predict the gaze directions of unlabeled target face or eye images given a set of annotated source images, and has wide practical applications. However, existing methods still perform poorly due to two major challenges: 1) large personal differences and style discrepancies between source and target samples easily cause the learned source model to collapse into biased results; and 2) data uncertainties inherent in the reference samples degrade the generalization ability of the adapted model. To tackle these challenges, we propose a novel Domain-Consistent and Uncertainty-Aware (DCUA) network for generalizable gaze estimation. The DCUA network adopts a two-phase framework in which a primary training sub-network (PTNet) and a refined adaptation sub-network (RANet) are trained on the source and target domains, respectively. First, to obtain robust and pure gaze-related features, we propose two domain-consistency constraints, namely an intra-domain consistency constraint and an inter-domain consistency constraint. These constraints suppress the impact of gaze-irrelevant factors by maintaining consistency between the label space and the feature space. Second, to further improve the adaptability of the model, we propose dual uncertainty-perception modules: an intrinsic uncertainty module and an extrinsic uncertainty module. These modules help the DCUA network identify inferior reference samples and avoid overfitting to them. Experiments on four cross-domain gaze estimation tasks demonstrate the effectiveness of our method.
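The abstract does not give the paper's exact loss formulations, but the idea of "maintaining consistency between label and feature space" can be sketched as a pairwise penalty: samples whose gaze labels are similar should also have similar features. The sketch below illustrates this with cosine similarity; the function names and the squared-difference form are assumptions for illustration, not the authors' definitions.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two vectors (e.g. 3D gaze directions)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def consistency_loss(features, gazes):
    """Hypothetical label/feature-space consistency penalty: for every
    sample pair, the feature-space similarity should match the
    gaze-label similarity. Returns the mean squared mismatch."""
    n = len(features)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            f_sim = cosine_sim(features[i], features[j])
            g_sim = cosine_sim(gazes[i], gazes[j])
            total += (f_sim - g_sim) ** 2
            pairs += 1
    return total / max(pairs, 1)
```

Under this sketch, a batch whose feature geometry mirrors its label geometry incurs zero loss, while features that conflate samples with different gaze directions are penalized.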
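Likewise, the uncertainty-perception idea, distinguishing inferior reference samples so the adaptation phase does not overfit to them, can be illustrated by down-weighting samples whose predictions vary strongly across stochastic forward passes (e.g. MC-dropout). This is a generic uncertainty-weighting sketch, not the paper's intrinsic/extrinsic modules; all names and the exponential weighting are assumptions.

```python
import numpy as np

def uncertainty_weights(pred_samples, tau=0.1):
    """pred_samples: (K, N, D) array of K stochastic predictions for
    N reference samples. Per-sample predictive variance is mapped to a
    weight in (0, 1]: stable samples get weight near 1, unreliable
    (high-variance) samples are exponentially suppressed."""
    var = pred_samples.var(axis=0).mean(axis=1)  # (N,) mean variance per sample
    return np.exp(-var / tau)

def weighted_adaptation_loss(preds, pseudo_labels, weights):
    """Uncertainty-weighted L1 loss against pseudo labels, normalized
    by the total weight so suppressed samples barely contribute."""
    per_sample = np.abs(preds - pseudo_labels).mean(axis=1)  # (N,)
    return float((weights * per_sample).sum() / weights.sum())
```

In this sketch, a sample whose K stochastic predictions agree keeps full weight, while a sample with noisy predictions contributes almost nothing to the adaptation loss.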
