Gaze
Computer science
Leverage (statistics)
Adaptability
Domain adaptation
Artificial intelligence
Adaptation (eye)
Machine learning
Domain (mathematical analysis)
Mathematics
Classifier (UML)
Biology
Ecology
Physics
Optics
Mathematical analysis
Authors
Ruicong Liu, Yunfei Liu, Haofei Wang, Feng Lu
Identifier
DOI:10.1109/tpami.2023.3348528
Abstract
Appearance-based gaze estimation has garnered increasing attention in recent years. However, deep learning-based gaze estimation models still suffer from suboptimal performance when deployed in new domains, e.g., unseen environments or individuals. In our previous work, we took on this challenge for the first time by introducing a plug-and-play method (PnP-GA) that adapts a gaze estimation model to new domains. The core concept of PnP-GA is to leverage the diversity brought by a group of model variants to enhance adaptability to diverse environments. In this article, we propose PnP-GA+, which extends our approach by exploring three additional perspectives for assembling model variants: color space, data augmentation, and model structure. Moreover, we propose an intra-group attention module that dynamically optimizes pseudo-labeling during adaptation. Experimental results demonstrate that, by directly plugging several existing gaze estimation networks into the PnP-GA+ framework, our method outperforms state-of-the-art domain adaptation approaches on four standard gaze domain adaptation tasks on public datasets. It consistently enhances cross-domain performance, and its versatility is improved through the various ways of assembling the model group.
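To make the group-based pseudo-labeling idea concrete, the sketch below shows one plausible way an intra-group attention step could fuse gaze predictions (pitch, yaw) from several model variants: members whose predictions sit closer to the group consensus receive larger softmax weights, so outlier variants contribute less to the pseudo-label. This is a minimal illustrative sketch, not the paper's actual module; the function name, the distance-based attention score, and the fusion rule are all assumptions for illustration.

```python
import math

def intra_group_pseudo_label(predictions):
    """Hypothetical sketch of intra-group attention for pseudo-labeling.

    predictions: list of (pitch, yaw) gaze estimates, one per model variant.
    Returns a fused (pitch, yaw) pseudo-label and the attention weights.
    Weights are a softmax over each member's negative mean distance to the
    rest of the group, so near-consensus members dominate the fusion.
    """
    n = len(predictions)
    # Mean Euclidean distance from each prediction to the other group members.
    mean_dists = []
    for i, (pi, yi) in enumerate(predictions):
        d = sum(math.hypot(pi - pj, yi - yj)
                for j, (pj, yj) in enumerate(predictions) if j != i)
        mean_dists.append(d / (n - 1))
    # Softmax attention: smaller distance to the group -> larger weight.
    exps = [math.exp(-d) for d in mean_dists]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Attention-weighted fusion of the group's predictions.
    pitch = sum(w * p for w, (p, _) in zip(weights, predictions))
    yaw = sum(w * y for w, (_, y) in zip(weights, predictions))
    return (pitch, yaw), weights
```

With three agreeing variants and one outlier, the outlier receives the smallest weight and the fused pseudo-label stays close to the consensus of the group.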