Authors
Yu Ru Su, Shuanghong Shen, Linbo Zhu, Le Wu, Zhenya Huang, Zeyu Cheng, Qi Liu, Shijin Wang
Abstract
Student performance prediction is a critical task in supporting decision-making for Intelligent Tutoring Systems (ITS). Accurate predictions of student performance are a prerequisite for ITS to supply intelligent services and optimize learning efficiency, e.g., recommending the most appropriate learning resources for each student. Existing methods mainly include cognitive diagnosis and knowledge tracing, both of which model students' cognition from their interactions with a sequence of items and make predictions by assessing whether their cognitive states meet the item requirements. Specifically, cognitive diagnosis considers only students' global static cognitive states, while knowledge tracing focuses on students' local dynamics. However, both global and local features are critical for predicting student performance. Therefore, in this paper, we propose a novel Global and Local Neural Cognitive (GLNC) model to capture both global and local features in student-item interactions for more accurate predictions. Specifically, we first learn students' global cognitive level from all student-item interactions. Then, we propose a self-attentive encoder based on the scaled dot-product attention mechanism to extract the local cognitive dynamics and the dependencies among students' recent interactions. Finally, to obtain better predictions, we design a fused gate based on the similarity between students' recently answered items and the item to be predicted, which adaptively combines the global and local features. To evaluate the effectiveness of GLNC, we compare it with both cognitive diagnosis and knowledge tracing methods. All experiments are conducted on three public datasets containing real student-item interactions on mathematics or language courses from various ITS. The experimental results show that GLNC achieves average scores of 0.7810 in AUC, 0.7627 in ACC, 0.3987 in RMSE, and 0.2023 in R², corresponding to average improvements of 1.84%, 1.07%, 1.87%, and 11.38%, respectively, over existing state-of-the-art methods (i.e., NCD and LPKT). Moreover, we further analyze the performance of GLNC under different probabilities of guessing and slipping; the results indicate that GLNC, by considering both global and local features, is more robust to the influence of noisy data. Benefiting from its superior accuracy and stability, our proposed GLNC has a wide range of potential applications in ITS and can be readily applied to improve students' learning efficiency and experience.
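As a reading aid, the sketch below is a minimal, illustrative PyTorch rendering of the architecture the abstract describes, not the authors' released implementation: a self-attentive encoder (scaled dot-product attention) extracts a local cognitive state from recent interactions, and a similarity-driven gate fuses it with a global cognitive state before prediction. All module names, tensor shapes, and the exact gating formula are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedGlobalLocalFusion(nn.Module):
    """Illustrative sketch (assumed structure, not the paper's exact equations):
    a self-attentive local encoder plus a similarity-based fused gate that
    mixes a global cognitive feature with the local dynamic feature."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Scaled dot-product self-attention over the recent interaction window.
        self.local_encoder = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gate driven by the local state, the target item, and their similarity.
        self.gate = nn.Linear(2 * dim, 1)
        self.predictor = nn.Linear(dim, 1)

    def forward(self, global_state, recent_items, target_item):
        # global_state: (B, D)    static cognitive level learned from all interactions
        # recent_items: (B, T, D) embeddings of the recent student-item interactions
        # target_item:  (B, D)    embedding of the item to be predicted
        local_seq, _ = self.local_encoder(recent_items, recent_items, recent_items)
        local_state = local_seq[:, -1, :]  # latest local cognitive state

        # Similarity between recently answered items and the target item.
        sim = F.cosine_similarity(recent_items, target_item.unsqueeze(1), dim=-1)  # (B, T)
        sim_summary = sim.mean(dim=1, keepdim=True)  # (B, 1)

        # Fused gate: adaptively weight local vs. global features.
        gate = torch.sigmoid(
            self.gate(torch.cat([local_state, target_item], dim=-1)) + sim_summary
        )
        fused = gate * local_state + (1.0 - gate) * global_state

        # Predicted probability of answering the target item correctly.
        return torch.sigmoid(self.predictor(fused * target_item))
```

In this reading, a high similarity between the recent interactions and the target item pushes the gate toward the local dynamic state, while dissimilar targets fall back on the global cognitive level, which matches the fusion behavior the abstract attributes to GLNC.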