Computer science
Eye movement
Visual search
Mixed reality
Visualization
Augmented reality
Eye movements
Artificial intelligence
Data visualization
Computer vision
Tracking (education)
Human-computer interaction
Cognitive psychology
Psychology
Pedagogy
Authors
Francesco Chiossi, Ines Trautmannsheimer, Changkun Ou, Uwe Gruenefeld, Sven Mayer
Identifiers
DOI: 10.1109/tvcg.2024.3456172
Abstract
Mixed Reality allows us to integrate virtual and physical content into users' environments seamlessly. Yet, how this fusion affects perceptual and cognitive resources and our ability to find virtual or physical objects remains uncertain. Displaying virtual and physical information simultaneously might lead to divided attention and increased visual complexity, impacting users' visual processing, performance, and workload. In a visual search task, we asked participants to locate virtual and physical objects in Augmented Reality and Augmented Virtuality to understand the effects on performance. We evaluated search efficiency and attention allocation for virtual and physical objects using event-related potentials, fixation and saccade metrics, and behavioral measures. We found that users were more efficient in identifying objects in Augmented Virtuality, while virtual objects gained saliency in Augmented Virtuality. This suggests that visual fidelity might increase the perceptual load of the scene. Reduced amplitude of the distractor positivity ERP and fixation patterns supported improved distractor suppression and search efficiency in Augmented Virtuality. We discuss design implications for mixed reality adaptive systems based on physiological inputs for interaction.
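The abstract mentions fixation and saccade metrics as part of the evaluation. As background on how such metrics are typically derived, the sketch below shows a simple velocity-threshold (I-VT) classification of raw gaze samples into fixations and saccades; it is not the authors' pipeline, and the sampling rate, threshold value, and function names are assumptions for illustration only.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's method): classify gaze samples
# into fixations and saccades with a velocity threshold (I-VT), then report
# coarse summary metrics of the kind the abstract refers to.

SAMPLE_RATE_HZ = 120          # assumed eye-tracker sampling rate
VELOCITY_THRESHOLD = 30.0     # deg/s, a commonly used I-VT cutoff (assumption)

def classify_gaze(x_deg, y_deg):
    """Label each inter-sample interval as saccade (True) or fixation (False)."""
    dx = np.diff(x_deg)
    dy = np.diff(y_deg)
    velocity = np.hypot(dx, dy) * SAMPLE_RATE_HZ   # angular velocity in deg/s
    return velocity > VELOCITY_THRESHOLD

def summarize(x_deg, y_deg):
    """Compute coarse fixation/saccade counts and mean fixation duration."""
    is_saccade = classify_gaze(x_deg, y_deg)
    # Split the label sequence into contiguous runs of the same label.
    change_points = np.flatnonzero(np.diff(is_saccade.astype(int))) + 1
    runs = np.split(is_saccade, change_points)
    n_saccades = int(sum(run[0] for run in runs))
    n_fixations = len(runs) - n_saccades
    total_fixation_time = np.count_nonzero(~is_saccade) / SAMPLE_RATE_HZ
    mean_fixation_duration = total_fixation_time / max(n_fixations, 1)
    return {
        "fixations": n_fixations,
        "saccades": n_saccades,
        "mean_fixation_duration_s": mean_fixation_duration,
    }

if __name__ == "__main__":
    t = np.arange(0, 2, 1 / SAMPLE_RATE_HZ)
    # Synthetic gaze trace: stable gaze with two abrupt shifts (saccade-like jumps).
    x = np.where(t < 0.8, 0.0, np.where(t < 1.5, 5.0, 10.0)) + 0.05 * np.random.randn(t.size)
    y = 0.05 * np.random.randn(t.size)
    print(summarize(x, y))
```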