Computer science
Collaborative filtering
Hidden Markov model
Artificial intelligence
Markov process
Machine learning
Markov chain
Graph
Context (archaeology)
Markov model
Theoretical computer science
Recommender system
Data mining
Mathematics
Statistics
Biology
Paleontology
Authors
Jun Hu, Bryan Hooi, Shengsheng Qian, Changsheng Xu
Source
Journal: IEEE Transactions on Knowledge and Data Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2024-07-01
Volume/Issue: 36 (7): 3281-3296
Citations: 11
Identifier
DOI: 10.1109/tkde.2023.3348537
Abstract
Graph Neural Networks (GNNs) have recently been utilized to build Collaborative Filtering (CF) models to predict user preferences based on historical user-item interactions. However, there is relatively little understanding of how GNN-based CF models relate to some traditional Network Representation Learning (NRL) approaches. In this paper, we show the equivalence between some state-of-the-art GNN-based CF models and a traditional 1-layer NRL model based on context encoding. Based on a Markov process that trades off two types of distances, we present Markov Graph Diffusion Collaborative Filtering (MGDCF) to generalize some state-of-the-art GNN-based CF models. Instead of considering the GNN as a trainable black box that propagates learnable user/item vertex embeddings, we treat GNNs as an untrainable Markov process that can construct constant context features of vertices for a traditional NRL model that encodes context features with a fully-connected layer. Such a simplification can help us better understand how GNNs benefit CF models. In particular, it helps us realize that ranking losses play a crucial role in GNN-based CF tasks. With our proposed simple yet powerful ranking loss InfoBPR, the NRL model can still perform well without the context features constructed by GNNs. We conduct experiments to perform a detailed analysis of MGDCF.
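The abstract describes the MGDCF pipeline at a high level: an untrainable Markov graph diffusion that trades off moving toward a vertex's neighbours against staying close to its initial features, a traditional 1-layer NRL encoder (a single fully-connected layer) applied to the resulting constant context features, and a ranking loss (InfoBPR). The sketch below illustrates that pipeline in PyTorch. It is a minimal sketch under stated assumptions: the APPNP-style update rule, the coefficient alpha, the number of diffusion steps, all class/function names, and the multi-negative BPR-style loss used as a stand-in for InfoBPR are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def markov_graph_diffusion(adj_norm, x, num_steps=4, alpha=0.1):
    # Untrainable Markov-process diffusion over the normalized user-item graph.
    # Each step trades off moving toward neighbours against staying close to the
    # initial vertex features x, yielding constant "context features".
    # NOTE: the APPNP-style update and the value of alpha are assumptions made
    # for illustration, not the paper's exact coefficients.
    h = x
    for _ in range(num_steps):
        h = (1.0 - alpha) * (adj_norm @ h) + alpha * x
    return h

class OneLayerNRLEncoder(torch.nn.Module):
    # Traditional 1-layer NRL model: encodes the constant context features with
    # a single fully-connected layer (hypothetical class name and dimensions).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = torch.nn.Linear(in_dim, out_dim)

    def forward(self, context_features):
        return self.fc(context_features)

def multi_negative_ranking_loss(user_emb, pos_item_emb, neg_item_emb):
    # Multi-negative, BPR-style ranking loss used here as an illustrative
    # stand-in for InfoBPR; the exact InfoBPR formulation is given in the paper.
    # neg_item_emb has shape [batch, K, dim] for K sampled negative items.
    pos_score = (user_emb * pos_item_emb).sum(-1, keepdim=True)       # [B, 1]
    neg_score = torch.einsum('bd,bkd->bk', user_emb, neg_item_emb)    # [B, K]
    # softplus(neg - pos) equals -log(sigmoid(pos - neg)), i.e. BPR per negative.
    return F.softplus(neg_score - pos_score).mean()

In this reading, the diffusion step is precomputed once (it has no learnable parameters), and only the fully-connected encoder is trained with the ranking loss, which is what lets the authors isolate how much of the benefit of GNN-based CF comes from the ranking loss rather than from the propagation itself.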