Reinforcement learning
Computer science
Distillation
Path (computing)
State (computer science)
Artificial intelligence
Machine learning
Algorithm
Chemistry
Organic chemistry
Programming language
Authors
Qingyao Li, Wei Xia, Liang Yin, Jiarui Jin, Yong Yu
Identifier
DOI:10.1145/3637528.3671872
Abstract
Educational recommendation seeks to suggest knowledge concepts that match a learner's ability, thus facilitating a personalized learning experience. In recent years, reinforcement learning (RL) methods have achieved considerable results by taking the encoding of the learner's exercise log as the state and employing an RL-based agent to make suitable recommendations. However, these approaches struggle to handle diverse and dynamic learner knowledge states. In this paper, we introduce the privileged feature distillation technique and propose the Privileged Knowledge State Distillation (PKSD) framework, which allows the RL agent to leverage the "actual" knowledge state as privileged information in the state encoding, helping tailor recommendations to individual needs. Concretely, PKSD combines the privileged knowledge states with the representations of the exercise log to form the state representations during training, and through distillation it transfers the ability to adapt to learners to a knowledge state adapter. During inference, the knowledge state adapter supplies estimated privileged knowledge states in place of the real ones, which are not accessible. Since knowledge concepts in education are strongly interconnected, we further propose to incorporate graph structure learning over concepts into the PKSD framework; we term this new approach GEPKSD (Graph-Enhanced PKSD). As our method is model-agnostic, we evaluate PKSD and GEPKSD by integrating them with five different RL bases on four public simulators. Our results verify that PKSD consistently improves recommendation performance across the various RL methods, and that GEPKSD further enhances the effectiveness of PKSD in all the simulations.
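To make the training/inference asymmetry described in the abstract concrete, below is a minimal PyTorch sketch of the privileged-feature-distillation idea: the RL state concatenates the exercise-log encoding with the privileged knowledge state during training, while an adapter network is distilled to estimate that state for inference. This is not the authors' implementation; the names (KnowledgeStateAdapter, build_state), layer sizes, and the MSE distillation loss are all illustrative assumptions.

```python
# Minimal sketch of privileged knowledge state distillation (assumed design,
# not the paper's code). PyTorch is used for illustration.
from typing import Optional

import torch
import torch.nn as nn

class KnowledgeStateAdapter(nn.Module):
    """Estimates the privileged knowledge state from the exercise-log encoding."""
    def __init__(self, log_dim: int, ks_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(log_dim, 128), nn.ReLU(), nn.Linear(128, ks_dim)
        )

    def forward(self, log_enc: torch.Tensor) -> torch.Tensor:
        return self.net(log_enc)

def build_state(log_enc: torch.Tensor,
                privileged_ks: Optional[torch.Tensor],
                adapter: KnowledgeStateAdapter) -> torch.Tensor:
    """Training: concatenate the real privileged knowledge state.
    Inference: substitute the adapter's estimate, since the real
    knowledge state is not accessible."""
    ks = privileged_ks if privileged_ks is not None else adapter(log_enc)
    return torch.cat([log_enc, ks], dim=-1)

# Toy usage: one distillation step (dimensions are arbitrary).
log_dim, ks_dim, batch = 32, 16, 8
adapter = KnowledgeStateAdapter(log_dim, ks_dim)
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)

log_enc = torch.randn(batch, log_dim)        # encoded exercise log
privileged_ks = torch.randn(batch, ks_dim)   # "actual" state (simulator-only)

state = build_state(log_enc, privileged_ks, adapter)  # fed to the RL agent
distill_loss = nn.functional.mse_loss(adapter(log_enc), privileged_ks)
opt.zero_grad()
distill_loss.backward()
opt.step()

# At inference the privileged state is absent, so the adapter fills in:
infer_state = build_state(log_enc, None, adapter)
```

One property of this arrangement is that the state vector has the same dimensionality in training and deployment, so the same RL policy network can consume either the real or the estimated knowledge state without modification.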