Keywords
Gesture
Distraction
Computer science
Usability
Head-up display
Human–computer interaction
Safety
Driving simulator
Fixation (eye movement)
Task (secondary task)
Workload
Distracted driving
Perception
Simulation
Computer vision
Engineering
Computer security
Psychology
Neuroscience
Population
Demography
Systems engineering
Sociology
Operating system
Authors
Yusheng Cao, Lingyu Li, Jiehao Yuan, Myounghoon Jeon
Identifier
DOI: 10.1080/10447318.2024.2387397
Abstract
In-vehicle infotainment systems can cause various distractions, increasing the risk of car accidents. To address this problem, mid-air gesture systems have been introduced. This study investigated the potential of a novel interface that integrates a Head-Up Display (HUD) with auditory displays (spearcons: compressed speech) in a gesture-based menu navigation system to minimize visual distraction and improve driving and secondary-task performance. The experiment involved 24 participants who navigated through 12 menu items using mid-air gestures while driving on a simulated road under four conditions: HUD (with and without spearcons) and Head-Down Display (HDD) (with and without spearcons). Results showed that the HUD condition significantly outperformed the HDD condition in participants' level 1 situation awareness, perceived workload, menu navigation performance, and system usability. However, there were trade-offs in visual fixation duration on the menu and in lane deviation. These findings will guide future research in developing safer and more effective HUD-supported in-vehicle gesture interaction systems.