Computer science
Pattern
Task (project management)
Facial expression
Wearable computer
Modality (human–computer interaction)
Focus (optics)
Artificial intelligence
Human–computer interaction
Personalization
Cognition
Task analysis
Machine learning
Robot
Psychology
Engineering
Social science
Physics
Systems engineering
Optics
Neuroscience
Sociology
World Wide Web
Embedded system
Authors
Ashwin Ramesh Babu, Akilesh Rajavenkatanarayanan, James Brady, Fillia Makedon
Identifier
DOI:10.1145/3279810.3279849
Abstract
Recent developments in computer vision and the emergence of wearable sensors have opened opportunities for advanced and sophisticated techniques that enable multi-modal user assessment and personalized training, which are important in educational, industrial-training, and rehabilitation applications. They have also paved the way for assistive robots that accurately assess human cognitive and physical skills. Assessment and training cannot be generalized, as requirements vary across individuals and applications; a system's ability to adapt to an individual's needs and performance is essential to its effectiveness. This paper focuses on task performance prediction, an important parameter for personalization. Several prior works address predicting task performance from physiological and behavioral data. In this work, we follow a multi-modal approach in which the system collects information from different modalities and predicts performance based on (a) the user's emotional state recognized from facial expressions (behavioral data), (b) the user's emotional state recognized from body postures (behavioral data), and (c) task performance estimated from EEG signals (physiological data) while the person performs a robot-based cognitive task. This multi-modal combination of physiological and behavioral data produces the highest accuracy, 87.5 percent, outperforming prediction from any single modality. In particular, the approach is useful for finding associations between facial expressions, body postures, and brain signals while a person performs a cognitive task.
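The abstract does not specify how the three modalities are combined, so the following is a minimal, hypothetical sketch of one common reading of "combining physiological and behavioral data": a late-fusion (stacking) setup with one classifier per modality (facial expressions, body postures, EEG) and a second-stage classifier trained on their out-of-fold probabilities. All feature arrays, dimensions, and labels below are synthetic placeholders, not the paper's data or actual pipeline.

```python
# Hypothetical late-fusion sketch for multi-modal performance prediction.
# Feature dimensions, classifiers, and labels are assumptions for
# illustration only; the paper's actual models and data are not shown here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200
X_face = rng.normal(size=(n, 34))   # assumed facial-expression features
X_pose = rng.normal(size=(n, 20))   # assumed body-posture features
X_eeg = rng.normal(size=(n, 64))    # assumed EEG band-power features
y = rng.integers(0, 2, size=n)      # binary good/poor performance label

tr, te = train_test_split(np.arange(n), test_size=0.25, random_state=0)

def modality_probs(X):
    """Train a per-modality classifier; return out-of-fold probabilities
    on the training split (so the fusion stage can be trained without
    leakage) and probabilities on the held-out test split."""
    clf = LogisticRegression(max_iter=1000)
    p_tr = cross_val_predict(clf, X[tr], y[tr], cv=5,
                             method="predict_proba")[:, 1]
    clf.fit(X[tr], y[tr])
    return p_tr, clf.predict_proba(X[te])[:, 1]

# One probability column per modality, stacked as fusion-stage features.
pairs = [modality_probs(X) for X in (X_face, X_pose, X_eeg)]
Z_tr = np.column_stack([p for p, _ in pairs])
Z_te = np.column_stack([p for _, p in pairs])

fusion = LogisticRegression().fit(Z_tr, y[tr])
print("fused accuracy:", accuracy_score(y[te], fusion.predict(Z_te)))
```

Late fusion is only one plausible design: feature-level (early) fusion, concatenating the per-modality features before a single classifier, would also fit the abstract's description equally well.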