Information overload
Computer science
Recommender system
Encoding
Graph
Machine learning
Task (project management)
Knowledge graph
Artificial intelligence
Theoretical computer science
World Wide Web
Biochemistry
Chemistry
Management
Economics
Gene
Authors
Hao Wang, Yao Xu, Cheng Yang, Chuan Shi, Xin Li, Ning Guo, Zhiyuan Liu
Identifier
DOI:10.1145/3539597.3570483
Abstract
By jointly modeling user-item interactions and knowledge graph (KG) information, KG-based recommender systems have shown their superiority in alleviating data sparsity and cold-start problems. Recently, graph neural networks (GNNs) have been widely used in KG-based recommendation, owing to their strong ability to capture high-order structural information. However, we argue that existing GNN-based methods have two limitations. Interaction domination: the supervision signal from user-item interactions dominates model training, so KG information is barely encoded in the learned item representations. Knowledge overload: the KG contains much recommendation-irrelevant information, and such noise is amplified during the message aggregation of GNNs. These limitations prevent existing methods from fully utilizing the valuable information in the KG. In this paper, we propose a novel algorithm named Knowledge-Adaptive Contrastive Learning (KACL) to address these challenges. Specifically, we first generate data augmentations from the user-item interaction view and the KG view separately, and perform contrastive learning across the two views. Our contrastive loss forces the item representations to encode information shared by both views, thereby alleviating the interaction domination issue. Moreover, we introduce two learnable view generators that adaptively remove task-irrelevant edges during data augmentation, helping tolerate the noise brought by knowledge overload. Experimental results on three public benchmarks demonstrate that KACL significantly improves top-K recommendation performance compared with state-of-the-art methods.
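The abstract describes two components: a cross-view contrastive objective over interaction-view and KG-view item embeddings, and learnable view generators that drop task-irrelevant edges during augmentation. Below is a minimal PyTorch sketch of both ideas under assumed details; the InfoNCE form, the Gumbel-Sigmoid edge-drop relaxation, and all names such as cross_view_contrastive_loss and LearnableEdgeDropper are illustrative assumptions, not the paper's published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def cross_view_contrastive_loss(z_inter, z_kg, temperature=0.2):
    """InfoNCE-style loss between interaction-view and KG-view item embeddings.

    z_inter, z_kg: (num_items, dim) embeddings of the same items from the two
    views; the positive pair is the same item across views, and every other
    item in the batch acts as a negative.
    """
    z1 = F.normalize(z_inter, dim=-1)
    z2 = F.normalize(z_kg, dim=-1)
    logits = z1 @ z2.t() / temperature                    # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # diagonal entries are positives
    # Symmetrize over the two directions (view1 -> view2 and view2 -> view1).
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


class LearnableEdgeDropper(nn.Module):
    """Hypothetical stand-in for a learnable "view generator": scores each edge
    from its endpoint embeddings and samples a soft keep/drop mask with a
    Gumbel-Sigmoid (binary concrete) relaxation, so the mask stays
    differentiable and task-irrelevant edges can learn low keep probabilities."""

    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, node_emb, edge_index, tau=0.5):
        src, dst = edge_index                              # edge_index: (2, num_edges) long tensor
        logit = self.scorer(torch.cat([node_emb[src], node_emb[dst]], dim=-1)).squeeze(-1)
        if self.training:
            u = torch.rand_like(logit).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)        # logistic noise for the relaxation
            keep_prob = torch.sigmoid((logit + noise) / tau)
        else:
            keep_prob = torch.sigmoid(logit)
        return keep_prob                                   # soft edge weights for GNN aggregation
```

In such a setup, the keep probabilities would weight messages in the GNN encoder of each augmented view, and the contrastive term would be optimized jointly with the recommendation loss.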