Understanding Flow Experience in Video Learning by Multimodal Data

Artificial intelligence, Boredom, Pattern, Computer science, Machine learning, Multi-task learning, Multilayer perceptron, Mean squared error, Speech recognition, Pattern recognition (psychology), Artificial neural network, Task (project management), Statistics, Psychology, Mathematics, Social psychology, Social science, Management, Sociology, Economics
Authors
Yankai Wang, Bing Chen, Hongyan Liu, Zhiguo Hu
Source
Journal: International Journal of Human-Computer Interaction [Informa]
Volume/Issue: 40 (12): 3144-3158; Cited by: 4
Identifier
DOI: 10.1080/10447318.2023.2181878
Abstract

Video-based learning has become an effective alternative to face-to-face instruction. In this context, modeling or predicting learners' flow experience during video learning is critical for enhancing the learning experience and advancing learning technologies. In this study, we set up an instructional scenario for video learning based on flow theory. Different learning states, i.e., boredom, fit (flow), and anxiety, were successfully induced by varying the difficulty level of the learning task. We collected learners' electrocardiogram (ECG) signals as well as facial video, upper-body posture, and speech data during the learning process. We built classification models of the learning state and regression models of flow experience using different combinations of data from the four modalities. The results showed that the performance of learning-state recognition was significantly improved by decision-level fusion of the multimodal data. Using the most important features selected from all data sources, such as the standard deviation of normal-to-normal R-R intervals (SDNN), high-frequency (HF) heart rate variability, and mel-frequency cepstral coefficients (MFCC), the multilayer perceptron (MLP) classifier achieved the best recognition of learning states (mean AUC of 0.780). The recognition accuracy for boredom, fit (flow), and anxiety reached 47.48%, 80.89%, and 47.41%, respectively. For flow experience prediction, an MLP regressor based on the fusion of two modalities (ECG and posture) achieved the best result (mean RMSE of 0.717). This study demonstrates the feasibility of modeling and predicting flow experience in video learning by combining multimodal data.
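The abstract names concrete ingredients of the pipeline: HRV features such as SDNN and HF power extracted from ECG R-R intervals, MFCC speech features, per-modality classifiers, and decision-level fusion with an MLP. The sketch below is not the authors' code; it only illustrates two of these steps under stated assumptions. SDNN and HF power are computed from a toy R-R interval series (4 Hz resampling and Welch spectral estimation are common conventions, not necessarily the paper's), and decision-level fusion is approximated by averaging predicted class probabilities from independently trained MLP classifiers, one per modality. All names, window settings, and the 0/1/2 coding of boredom/flow/anxiety are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of (1) HRV feature
# extraction from R-R intervals and (2) decision-level fusion of
# per-modality MLP classifiers by averaging class probabilities.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d
from sklearn.neural_network import MLPClassifier


def hrv_features(rr_ms):
    """SDNN and HF-band (0.15-0.4 Hz) power from R-R intervals in milliseconds."""
    rr_ms = np.asarray(rr_ms, dtype=float)
    sdnn = rr_ms.std(ddof=1)                       # standard deviation of N-N intervals
    # Resample the irregularly sampled RR series to 4 Hz before spectral estimation
    # (a common convention; the paper's exact settings are not specified here).
    t = np.cumsum(rr_ms) / 1000.0                  # beat times in seconds
    fs = 4.0
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    rr_uniform = interp1d(t, rr_ms, kind="cubic")(t_uniform)
    f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs,
                   nperseg=min(256, len(rr_uniform)))
    hf_band = (f >= 0.15) & (f <= 0.40)
    hf_power = pxx[hf_band].sum() * (f[1] - f[0])  # rectangle-rule band power
    return np.array([sdnn, hf_power])


def fuse_decisions(modality_models, modality_features):
    """Decision-level fusion: average class probabilities across modalities."""
    probas = [m.predict_proba(x) for m, x in zip(modality_models, modality_features)]
    mean_proba = np.mean(probas, axis=0)
    return mean_proba.argmax(axis=1)               # 0=boredom, 1=fit/flow, 2=anxiety (assumed coding)


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Toy R-R series (ms): baseline 800 ms with slow modulation and jitter.
    rr = 800 + 50 * np.sin(np.linspace(0, 10, 300)) + rng.normal(0, 20, 300)
    print("SDNN (ms), HF power:", hrv_features(rr))

    # Toy per-modality feature matrices standing in for ECG-, face-, posture-
    # and speech-derived features (e.g., MFCC); dimensions are arbitrary.
    n = 120
    y = rng.integers(0, 3, size=n)
    feats = [rng.normal(size=(n, d)) + y[:, None] * 0.5 for d in (6, 10, 8, 13)]
    models = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                            random_state=0).fit(x, y) for x in feats]
    y_pred = fuse_decisions(models, feats)
    print("fused training accuracy:", (y_pred == y).mean())
```

Averaging posterior probabilities is only one possible decision-level fusion rule; majority voting or AUC-weighted averaging would be equally plausible readings of the "decision-level fusion" described in the abstract.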