Understanding Flow Experience in Video Learning by Multimodal Data

Artificial intelligence · Boredom · Pattern · Computer science · Machine learning · Multi-task learning · Multilayer perceptron · Mean squared error · Speech recognition · Pattern recognition (psychology) · Artificial neural network · Task (project management) · Statistics · Psychology · Mathematics · Social psychology · Sociology · Economics · Social science · Management
Authors
Yankai Wang,Bing Chen,Hongyan Liu,Zhiguo Hu
Source
Journal: International Journal of Human-Computer Interaction [Taylor & Francis]
Volume/Issue: 40 (12): 3144-3158 · Cited by: 4
Identifier
DOI: 10.1080/10447318.2023.2181878
Abstract

Video-based learning has become an effective alternative to face-to-face instruction. In such settings, modeling or predicting learners' flow experience during video learning is critical for enhancing the learning experience and advancing learning technologies. In this study, we set up an instructional scenario for video learning according to flow theory. Different learning states, i.e., boredom, fit (flow), and anxiety, were successfully induced by varying the difficulty levels of the learning task. We collected learners' electrocardiogram (ECG) signals as well as facial video, upper-body posture, and speech data during the learning process. We proposed classification models of the learning state and regression models to predict flow experience by utilizing different combinations of the data from the four modalities. The results showed that the model performance of learning state recognition was significantly improved by the decision-level fusion of multimodal data. By using the selected important features from all data sources, such as the standard deviation of normal-to-normal R-R intervals (SDNN), high-frequency (HF) heart rate variability, and mel-frequency cepstral coefficients (MFCC), the multilayer perceptron (MLP) classifier gave the best recognition result of learning states (i.e., mean AUC of 0.780). The recognition accuracy of boredom, fit (flow), and anxiety reached 47.48%, 80.89%, and 47.41%, respectively. For flow experience prediction, the MLP regressor based on the fusion of two modalities (i.e., ECG and posture) achieved the optimal prediction (i.e., mean RMSE of 0.717). This study demonstrates the feasibility of modeling and predicting the flow experience in video learning by combining multimodal data.
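The abstract's core technique, decision-level fusion with per-modality MLP classifiers, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the data here is synthetic, the feature dimensions (2 "ECG" features standing in for SDNN/HF, 13 "MFCC" features) and network sizes are assumptions, and the fusion rule shown is a simple average of class probabilities.

```python
# Hedged sketch of decision-level fusion: train one MLP per modality,
# then average the predicted class probabilities at decision time.
# All data below is synthetic; feature names only mirror those in the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
# Three synthetic learning states: 0 = boredom, 1 = fit (flow), 2 = anxiety
y = rng.integers(0, 3, size=n)
# Hypothetical "ECG" features (e.g., SDNN, HF power) and "speech" features
# (MFCCs), each only weakly informative about the class on its own
ecg = y[:, None] + rng.normal(0, 1.0, size=(n, 2))
mfcc = y[:, None] + rng.normal(0, 1.0, size=(n, 13))

Xe_tr, Xe_te, Xm_tr, Xm_te, y_tr, y_te = train_test_split(
    ecg, mfcc, y, test_size=0.3, random_state=0)

# One classifier per modality
clf_ecg = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(Xe_tr, y_tr)
clf_mfcc = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(Xm_tr, y_tr)

# Decision-level fusion: average the per-modality class probabilities
proba = (clf_ecg.predict_proba(Xe_te) + clf_mfcc.predict_proba(Xm_te)) / 2
fused_pred = proba.argmax(axis=1)
acc = (fused_pred == y_te).mean()
print(f"fused accuracy: {acc:.2f}")
```

A weighted average or a meta-classifier trained on the stacked probabilities are common alternatives to the plain mean; the paper does not specify which fusion rule was used, so the average is shown as the simplest case.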