Understanding Flow Experience in Video Learning by Multimodal Data

Authors
Yankai Wang, Bing Chen, Hongyan Liu, Zhiguo Hu
Source
Journal: International Journal of Human-Computer Interaction [Taylor & Francis]
Volume/Issue: 40 (12): 3144-3158. Cited by: 4
Identifier
DOI: 10.1080/10447318.2023.2181878
Abstract

Video-based learning has become an effective alternative to face-to-face instruction. In such settings, modeling or predicting learners' flow experience during video learning is critical for enhancing the learning experience and advancing learning technologies. In this study, we set up an instructional scenario for video learning according to flow theory. Different learning states, i.e., boredom, fit (flow), and anxiety, were successfully induced by varying the difficulty level of the learning task. We collected learners' electrocardiogram (ECG) signals as well as facial video, upper-body posture, and speech data during the learning process. We proposed classification models of the learning state and regression models to predict flow experience by utilizing different combinations of the data from the four modalities. The results showed that the performance of learning-state recognition was significantly improved by decision-level fusion of the multimodal data. Using the selected important features from all data sources, such as the standard deviation of normal-to-normal R-R intervals (SDNN), high-frequency (HF) heart rate variability, and mel-frequency cepstral coefficients (MFCC), the multilayer perceptron (MLP) classifier gave the best recognition of learning states (mean AUC of 0.780). The recognition accuracy for boredom, fit (flow), and anxiety reached 47.48%, 80.89%, and 47.41%, respectively. For flow experience prediction, the MLP regressor based on the fusion of two modalities (ECG and posture) achieved the best prediction (mean RMSE of 0.717). This study demonstrates the feasibility of modeling and predicting flow experience in video learning by combining multimodal data.
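The pipeline the abstract describes — per-modality features (e.g., SDNN from ECG R-R intervals) fed to MLP classifiers whose class probabilities are then fused at the decision level — can be sketched as follows. This is a minimal illustrative sketch on synthetic data, assuming probability-averaging fusion and made-up feature values and modality names; it is not the authors' implementation, and their exact features, fusion rule, and hyperparameters may differ.

```python
# Illustrative sketch (NOT the paper's code): per-modality MLP classifiers
# with decision-level fusion by averaging predicted class probabilities.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def sdnn(rr_ms):
    """SDNN: standard deviation of normal-to-normal R-R intervals (ms)."""
    return np.std(rr_ms, ddof=1)

# Example HRV feature on a toy beat series (values are made up).
rr = np.array([812, 790, 805, 821, 798], dtype=float)  # R-R intervals in ms
print(f"SDNN: {sdnn(rr):.1f} ms")

# Synthetic per-learner feature matrices for three modalities
# (stand-ins for ECG, facial-video, and posture features).
n = 300
y = rng.integers(0, 3, n)  # 0 = boredom, 1 = fit (flow), 2 = anxiety
ecg = rng.normal(y[:, None], 1.0, (n, 4))
face = rng.normal(y[:, None], 2.0, (n, 6))
posture = rng.normal(y[:, None], 2.0, (n, 3))

idx_tr, idx_te = train_test_split(np.arange(n), random_state=0, stratify=y)

# Train one MLP per modality, collect its test-set class probabilities.
probas = []
for X in (ecg, face, posture):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X[idx_tr], y[idx_tr])
    probas.append(clf.predict_proba(X[idx_te]))

# Decision-level fusion: average class probabilities across modalities.
fused = np.mean(probas, axis=0)
auc = roc_auc_score(y[idx_te], fused, multi_class="ovr")
print(f"fused macro AUC: {auc:.3f}")
```

Averaging probabilities is one common decision-level fusion rule; weighted averaging or majority voting over per-modality predictions are alternatives that fit the same structure.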
