Gaze
Mode
Multimodal learning
Computer science
Perception
Modality (human–computer interaction)
Mode (computer interface)
Activity recognition
Human–computer interaction
Artificial intelligence
Machine learning
Psychology
Social science
Neuroscience
Sociology
Authors
R.H. Zhu, Shi Liang, Yunpeng Song, Zhongmin Cai
Source
Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
[Association for Computing Machinery]
Date: 2023-09-27
Volume/Issue: 7 (3): 1-35
Abstract
E-learning has emerged as an indispensable educational mode in the post-epidemic era. However, this mode makes it difficult for students to stay engaged in learning without appropriate activity monitoring. Our work explores a promising solution that combines gaze and mouse data to recognize students' activities, thereby facilitating activity monitoring and analysis during e-learning. We initially surveyed 200 students from a local university and found greater acceptance of eye trackers and mouse loggers than of video surveillance. We then designed eight routine digital activities for students to collect a multimodal dataset and analyzed the patterns and correlations between gaze and mouse behavior across these activities. Our proposed Joint Cross-Attention Fusion Net, a multimodal activity recognition framework, leverages the gaze-mouse relationship to yield improved classification performance by integrating cross-modal representations through a cross-attention mechanism and incorporating joint features that characterize gaze-mouse coordination. Evaluation results show that our method achieves up to a 94.87% F1-score in predicting 8-class activities, an improvement of at least 7.44% over using gaze or mouse data independently. This research illuminates new possibilities for monitoring student engagement in intelligent education systems and suggests a promising strategy for melding perception and action modalities in behavioral analysis across a range of ubiquitous computing environments.
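The abstract describes fusing gaze and mouse feature sequences with a cross-attention mechanism, where each modality attends to the other before the fused representation is classified into one of eight activities. The sketch below illustrates that general idea in NumPy; it is not the authors' implementation, and all names (`cross_attention`, the feature dimensions, the toy sequence lengths) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: queries come from one modality,
    keys/values from the other, yielding cross-modal representations."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (T_q, T_k) affinity matrix
    return softmax(scores, axis=-1) @ values  # (T_q, d) attended features

rng = np.random.default_rng(0)
d = 16
gaze = rng.standard_normal((30, d))   # toy gaze feature sequence, 30 steps
mouse = rng.standard_normal((50, d))  # toy mouse feature sequence, 50 steps

# Each modality attends to the other, then mean-pool and concatenate
gaze_ctx = cross_attention(gaze, mouse, mouse)
mouse_ctx = cross_attention(mouse, gaze, gaze)
fused = np.concatenate([gaze_ctx.mean(axis=0), mouse_ctx.mean(axis=0)])  # (2d,)

# A (randomly initialized) linear head scores the 8 activity classes
logits = fused @ rng.standard_normal((2 * d, 8))
print(fused.shape, logits.shape)
```

In the paper's framework this fused representation would additionally be combined with explicit joint gaze-mouse coordination features before classification; the sketch shows only the cross-attention fusion step.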