Computer science
Human–computer interaction
Personalization
Automotive industry
Focus (optics)
Gaze
Multimodal interaction
Gesture
Adaptation (eye)
Artificial intelligence
Cluster analysis
Heuristics
User modeling
Modality
User interface
Engineering
Sociology
Social science
Aerospace engineering
Operating systems
World Wide Web
Physics
Optics
Identification
DOI:10.1145/3536221.3557034
Abstract
With the recently increasing capabilities of modern vehicles, novel interaction approaches have emerged that go beyond traditional touch-based and voice-command interfaces. Accordingly, hand gestures, head pose, eye gaze, and speech have been extensively investigated in automotive applications for object selection and referencing. Despite these significant advances, existing approaches mostly employ a one-model-fits-all strategy that is unsuitable for varying user behavior and individual differences. Moreover, current referencing approaches either consider these modalities separately or focus on a stationary situation, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints. In this paper, I propose a research plan for a user-centered adaptive multimodal fusion approach for referencing external objects from a moving vehicle. The proposed plan aims to provide an open-source framework for user-centered adaptation and personalization using user observations and heuristics, multimodal fusion, clustering, transfer learning for model adaptation, and continuous learning, moving towards trusted human-centered artificial intelligence.
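To make the core idea concrete, the sketch below illustrates one plausible form of user-centered adaptive multimodal fusion: each modality (e.g., gaze, gesture, speech) produces a probability distribution over candidate external objects, the distributions are combined by weighted late fusion with per-user reliability weights, and the weights are updated online from observed outcomes. The function names, the weighted-sum fusion rule, and the learning-rate update are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def fuse_modalities(scores, weights):
    """Weighted late fusion (illustrative): combine per-modality
    probability distributions over candidate objects into one
    normalized distribution using user-specific reliability weights."""
    fused = np.zeros_like(np.asarray(next(iter(scores.values())), dtype=float))
    for modality, probs in scores.items():
        fused += weights.get(modality, 0.0) * np.asarray(probs, dtype=float)
    total = fused.sum()
    return fused / total if total > 0 else fused

def update_weights(weights, scores, true_obj, lr=0.1):
    """Simple online adaptation (illustrative of continuous learning):
    raise the weight of modalities that assigned high probability to the
    object the user actually referenced, then renormalize."""
    updated = {}
    for modality, probs in scores.items():
        probs = np.asarray(probs, dtype=float)
        # reward = probability mass this modality put on the correct object
        updated[modality] = (1 - lr) * weights[modality] + lr * probs[true_obj]
    z = sum(updated.values())
    return {m: w / z for m, w in updated.items()}

# Example: gaze is confident about object 0, gesture slightly favors object 1.
scores = {"gaze": [0.7, 0.2, 0.1], "gesture": [0.4, 0.5, 0.1]}
weights = {"gaze": 0.5, "gesture": 0.5}
fused = fuse_modalities(scores, weights)          # fused distribution over objects
weights = update_weights(weights, scores, true_obj=0)
```

In this toy setup, a user whose gaze reliably lands on the referenced object would gradually accumulate a higher gaze weight, which is one way per-user personalization could be realized on top of a generic fused model.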