Computer science
Recommender system
Representation (politics)
Transparency (behavior)
Process (computing)
Graph
Collaborative filtering
Feature learning
Artificial intelligence
Machine learning
Matching (statistics)
Information retrieval
Theoretical computer science
Mathematics
Politics
Political science
Law
Operating system
Statistics
Computer security
Authors
Ninghao Liu,Yong Ge,Li Li,Xia Hu,Rui Chen,Soo-Hyun Choi
Identifier
DOI:10.1145/3340531.3411919
Abstract
Recommender systems play a fundamental role in web applications by filtering massive amounts of information and matching user interests. While many efforts have been devoted to developing more effective models in various scenarios, exploration of the explainability of recommender systems lags behind. Explanations can help improve user experience and uncover system defects. In this paper, after formally introducing the elements related to model explainability, we propose a novel explainable recommendation model that improves the transparency of the representation learning process. Specifically, to overcome the representation entangling problem in traditional models, we revise traditional graph convolution to discriminate information from different layers. In addition, each representation vector is factorized into several segments, where each segment relates to one semantic aspect of the data. Different from previous work, in our model factor discovery and representation learning are conducted simultaneously, and we are able to handle extra attribute information and knowledge. In this way, the proposed model can learn interpretable and meaningful representations for users and items. Unlike traditional methods that must trade off explainability against effectiveness, the performance of our proposed explainable model is not negatively affected after considering explainability. Finally, comprehensive experiments are conducted to validate the performance of our model as well as the faithfulness of its explanations.
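The abstract mentions two mechanisms: keeping the outputs of different graph-convolution layers distinguishable rather than entangled, and splitting each representation into segments that each correspond to one semantic aspect. The sketch below illustrates the general shape of these two ideas only; it is not the authors' implementation, and all dimensions, variable names, and scoring choices (N_LAYERS, K_SEGMENTS, the softmax over segment scores) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of:
# (1) layer-discriminating propagation: keep each layer's output as a
#     separate view instead of collapsing all layers together, and
# (2) segmented embeddings: split each vector into K segments, each meant
#     to capture one semantic aspect, so per-aspect scores can serve as
#     an explanation of a user-item match.
import numpy as np

N_NODES, DIM, N_LAYERS, K_SEGMENTS = 6, 8, 2, 4
SEG_DIM = DIM // K_SEGMENTS
rng = np.random.default_rng(0)

# Symmetric-normalized adjacency A_hat = D^{-1/2} (A + I) D^{-1/2}
A = (rng.random((N_NODES, N_NODES)) < 0.3).astype(float)
A = np.clip(np.maximum(A, A.T) + np.eye(N_NODES), 0.0, 1.0)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Initial node embeddings, interpreted as K_SEGMENTS blocks of SEG_DIM each.
E0 = rng.standard_normal((N_NODES, DIM))

# Keep every propagation layer's output separately, so downstream scoring
# can weight or inspect layers instead of mixing them indistinguishably.
layer_outputs = [E0]
for _ in range(N_LAYERS):
    layer_outputs.append(A_hat @ layer_outputs[-1])

def segment_scores(E, u, i):
    """Per-segment inner products between nodes u and i, plus a softmax
    over segments indicating which semantic aspect dominates the match."""
    seg_u = E[u].reshape(K_SEGMENTS, SEG_DIM)
    seg_i = E[i].reshape(K_SEGMENTS, SEG_DIM)
    s = (seg_u * seg_i).sum(axis=1)            # one score per segment
    w = np.exp(s - s.max()); w /= w.sum()      # aspect weights
    return s, w

for layer, E in enumerate(layer_outputs):
    s, w = segment_scores(E, u=0, i=1)
    print(f"layer {layer}: total={s.sum():.3f}, top aspect={int(w.argmax())}")
```

In this toy setup the segment with the largest weight plays the role of the "explaining" aspect for the user-item pair, and keeping the per-layer outputs separate is what allows layer-level information to be inspected rather than entangled.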