Computer science
Video editing
Post-production
Multimedia
Pipeline (software)
Cognitive load
Point (geometry)
Eye movement
Process (computing)
Quality (philosophy)
Video production
Film editing
Human–computer interaction
Artificial intelligence
Cognition
Psychology
Philosophy
Geometry
Mathematics
Epistemology
Neuroscience
Programming language
Operating system
Authors
Eugene Hwang, Jeongmi Lee
Identifier
DOI:10.1016/j.ijhcs.2023.103161
Abstract
Recently there has been a surge in demand for online video-based learning, and the importance of high-quality educational videos is ever-growing. However, a uniform format of videos that neglects individual differences and the labor-intensive process of editing are major setbacks in producing effective educational videos. This study aims to resolve the issues by proposing an automatic lecture video editing pipeline based on each individual’s attention pattern. In this pipeline, the eye-tracking data are obtained while each individual watches virtual lectures, which later go through multiple filters to define the viewer’s locus of attention and to select the appropriate shot at each time point to create personalized videos. To assess the effectiveness of the proposed method, video characteristics, subjective evaluations of the learning experience, and objective eye-movement features were compared between differently edited videos (attention-based, randomly edited, professionally edited). The results showed that our method dramatically reduced the editing time, with similar video characteristics to those of professionally edited versions. Attention-based versions were also evaluated to be significantly better than randomly edited ones, and as effective as professionally edited ones. Eye-tracking results indicated that attention-based videos have the potential to decrease the cognitive load of learners. These results suggest that attention-based automatic editing can be a viable or even a better alternative to the human expert-dependent approach, and individually-tailored videos have the potential to heighten the learning experience and effect.
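The pipeline described above (gaze samples filtered to a locus of attention, then a shot selected per time window) can be illustrated with a minimal sketch. This is not the authors' implementation: the moving-median filter, the fixed-length segments, and the named screen regions (`slides`, `instructor`) are all illustrative assumptions.

```python
from statistics import median

def smooth_gaze(samples, window=5):
    """Moving-median filter over (x, y) gaze samples to suppress
    noise and blink artifacts (illustrative stand-in for the paper's
    multiple filtering stages)."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window // 2)
        hi = min(len(samples), i + window // 2 + 1)
        xs = [s[0] for s in samples[lo:hi]]
        ys = [s[1] for s in samples[lo:hi]]
        out.append((median(xs), median(ys)))
    return out

def select_shots(gaze, regions, segment_len=30):
    """For each fixed-length segment of smoothed gaze points, pick the
    shot whose screen region (x0, y0, x1, y1 in normalized coordinates)
    contains the most points, i.e. the viewer's locus of attention.
    A point lying exactly on a shared boundary counts for both regions."""
    shots = []
    for start in range(0, len(gaze), segment_len):
        segment = gaze[start:start + segment_len]
        counts = {name: 0 for name in regions}
        for x, y in segment:
            for name, (x0, y0, x1, y1) in regions.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    counts[name] += 1
        shots.append(max(counts, key=counts.get))
    return shots
```

For example, with `regions = {"slides": (0.0, 0.0, 0.6, 1.0), "instructor": (0.6, 0.0, 1.0, 1.0)}`, a viewer whose gaze dwells on the left of the screen and then shifts right yields the shot sequence `["slides", "instructor"]`, one personalized cut per segment.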