Computer Science
Graphics
Artificial Intelligence
Natural Language Processing
Theoretical Computer Science
Authors
Wenwen Gong,Yangli‐ao Geng,Dan Zhang,Yifan Zhu,Xiaolong Xu,Haolong Xiang,Amin Beheshti,Xuyun Zhang,Lianyong Qi
Source
Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence
[Association for the Advancement of Artificial Intelligence (AAAI)]
Date: 2025-04-11
Volume/Issue: 39 (16): 16853-16861
Identifiers
DOI: 10.1609/aaai.v39i16.33852
Abstract
Graph Contrastive Learning (GCL), as a primary paradigm of graph self-supervised learning, has spurred a fruitful line of research on the data sparsity issue by maximizing the consistency of user/item embeddings across augmented views produced with random perturbations. However, diversity, a crucial factor in recommendation performance and user satisfaction, has received comparatively little attention, and balancing accuracy and diversity remains a challenging dilemma. To address these issues, we propose DivGCL, a new graph contrastive learning model for diversified recommendation. Inspired by the strength of the determinantal point process (DPP), DivGCL adopts a DPP likelihood-based loss function to achieve an ideal trade-off between diversity and accuracy, optimizing it jointly with an advanced Gaussian noise-augmented GCL objective. Extensive experiments on four popular datasets demonstrate that DivGCL surpasses existing approaches in balancing accuracy and diversity, with an improvement of 23.47% in T@20 (a trade-off metric) on ML-1M.
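The abstract combines two objectives: a DPP likelihood term that rewards item sets that are both relevant and mutually diverse, and a Gaussian noise-augmented contrastive term over perturbed embedding views. The following PyTorch sketch illustrates how such a joint loss could look in general; it is not the authors' released code, and names such as `dpp_nll`, `noise_eps`, and `lambda_cl`, as well as the specific kernel construction, are illustrative assumptions rather than paper terminology.

```python
# Illustrative sketch of a DPP likelihood loss plus a noise-augmented contrastive loss.
# Assumptions: item embeddings and relevance scores are given; the DPP kernel is
# L = diag(r) @ S @ diag(r), with S the cosine-similarity matrix of item embeddings.
import torch
import torch.nn.functional as F

def dpp_nll(item_emb, relevance, pos_idx, eps=1e-6):
    """Negative log-likelihood of the observed item set Y under a DPP:
    -log P(Y) = -log det(L_Y) + log det(L + I)."""
    unit = F.normalize(item_emb, dim=-1)
    sim = unit @ unit.T                                        # item-item similarity S
    L = relevance.unsqueeze(1) * sim * relevance.unsqueeze(0)  # kernel diag(r) S diag(r)
    L_pos = L[pos_idx][:, pos_idx]                             # principal submatrix L_Y
    eye_pos = eps * torch.eye(L_pos.size(0), device=L.device)  # numerical jitter
    eye_all = torch.eye(L.size(0), device=L.device)
    return -torch.logdet(L_pos + eye_pos) + torch.logdet(L + eye_all)

def noise_augmented_cl(emb, noise_eps=0.1, temperature=0.2):
    """InfoNCE between two views built by adding scaled Gaussian noise (SimGCL-style)."""
    def perturb(x):
        noise = F.normalize(torch.randn_like(x), dim=-1) * noise_eps
        return F.normalize(x + noise, dim=-1)
    z1, z2 = perturb(emb), perturb(emb)
    logits = z1 @ z2.T / temperature
    labels = torch.arange(emb.size(0), device=emb.device)      # matching rows are positives
    return F.cross_entropy(logits, labels)

def joint_loss(item_emb, relevance, pos_idx, lambda_cl=0.1):
    """Joint objective: DPP accuracy/diversity term plus contrastive regularizer."""
    return dpp_nll(item_emb, relevance, pos_idx) + lambda_cl * noise_augmented_cl(item_emb)
```

The key design point the abstract emphasizes is that the log-determinant term penalizes recommending near-duplicate items (their rows in the kernel become nearly linearly dependent, shrinking the determinant), while the relevance scores on the diagonal keep the set accurate, so a single likelihood trades off the two goals.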