Computer science
Mode
Dialog box
Modality (human-computer interaction)
Multimodal learning
Gaze
Tutor
Artificial intelligence
Machine learning
World Wide Web
Social science
Sociology
Programming language
Authors
Jennifer K. Olsen, Kshitij Sharma, Nikol Rummel, Vincent Aleven
Abstract
The analysis of multiple data streams is a long-standing practice within educational research. Both multimodal data analysis and temporal analysis have been applied successfully, but in the area of collaborative learning, very few studies have investigated the specific advantages of multiple modalities versus a single modality, especially in combination with temporal analysis. In this paper, we investigate how both the use of multimodal data and the move from averages and counts to temporal aspects in a collaborative setting provide a better prediction of learning gains. To address these questions, we analyze multimodal data collected from 25 dyads of 9-11-year-old students using a fractions intelligent tutoring system. Assessing the relation of dual gaze, tutor log, audio, and dialog data to students' learning gains, we find that a combination of modalities, especially those at a smaller time scale, such as gaze and audio, provides a more accurate prediction of learning gains than models with a single modality. Our work contributes to the understanding of how analyzing multimodal data in a temporal manner provides additional information about the collaborative learning process.
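The paper itself does not include code; the following is a minimal sketch, using synthetic data and hypothetical dyad-level feature names, of how one might compare a single-modality model against a multimodal model when predicting learning gains, in the spirit of the comparison described in the abstract. The feature construction and modeling choices here (cross-validated linear regression) are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only (not from the paper): compare cross-validated
# prediction of learning gains from one modality vs. several modalities.
# All data below is synthetic; feature names are hypothetical placeholders
# for dyad-level gaze, audio, and tutor-log measures.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_dyads = 25  # same sample size as the study, but the values are synthetic

# Hypothetical per-dyad features for each modality.
gaze = rng.normal(size=(n_dyads, 3))   # e.g., joint-attention measures
audio = rng.normal(size=(n_dyads, 2))  # e.g., speech-amount measures
logs = rng.normal(size=(n_dyads, 2))   # e.g., tutor error/hint counts

# Synthetic learning gains that depend on more than one modality.
learning_gain = (0.5 * gaze[:, 0] + 0.3 * audio[:, 0]
                 + rng.normal(scale=0.5, size=n_dyads))

def cv_r2(features):
    """Mean cross-validated R^2 of a linear model on the given features."""
    return cross_val_score(LinearRegression(), features, learning_gain,
                           cv=5, scoring="r2").mean()

print("gaze only:  ", cv_r2(gaze))
print("multimodal: ", cv_r2(np.hstack([gaze, audio, logs])))
```

Under these assumptions, a higher cross-validated R^2 for the stacked feature set would mirror the paper's finding that combining modalities predicts learning gains better than any single modality.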