Computer science
Retraining
Theoretical computer science
Artificial intelligence
Convolution (computer science)
Synthetic data
Graph
Recommender system
Machine learning
Data mining
Inference
Artificial neural network
Business
International trade
Authors
Sihao Ding, Fuli Feng, Xiangnan He, Yong Liao, Jun Shi, Yongdong Zhang
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Pages: 1-11
Citations: 15
Identifier
DOI: 10.1109/tnnls.2022.3156066
Abstract
Real-world recommender systems need to be regularly retrained to keep up with new data. In this work, we consider how to efficiently retrain graph convolution network (GCN)-based recommender models, which are state-of-the-art techniques for collaborative recommendation. To pursue high efficiency, we set the target as using only new data for model updating, while not sacrificing recommendation accuracy compared with full model retraining. This is nontrivial to achieve, since the interaction data participates in both the graph structure for model construction and the loss function for model learning, whereas the old graph structure is not allowed to be used in model updating. Toward this goal, we propose a causal incremental graph convolution (IGC) approach, which consists of two new operators, named IGC and colliding effect distillation (CED), to estimate the output of full graph convolution. In particular, we devise simple and effective modules for IGC that ingeniously combine the old representations with the incremental graph and effectively fuse the long- and short-term preference signals. CED aims to avoid the out-of-date issue for inactive nodes that are absent from the incremental graph, by connecting the new data with inactive nodes through causal inference. In particular, CED estimates the causal effect of new data on the representations of inactive nodes by controlling for their collider. Extensive experiments on three real-world datasets demonstrate both accuracy gains and significant speed-ups over the existing retraining mechanism.
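The core idea of the abstract's IGC operator, propagating over the incremental graph only while reusing old representations as the long-term signal, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `incremental_gc_update`, the fusion weight `alpha`, and the single-step propagation are all assumptions for illustration; the actual IGC and CED modules are defined in the paper itself.

```python
import numpy as np

def incremental_gc_update(old_emb, new_edges, num_nodes, alpha=0.5):
    """Hypothetical sketch of one incremental graph-convolution step:
    aggregate neighbor embeddings over the *incremental* graph only,
    then fuse the result with the old (long-term) representations."""
    # Build the adjacency matrix of the incremental graph from new edges only;
    # the old graph structure is deliberately never touched.
    adj = np.zeros((num_nodes, num_nodes))
    for u, v in new_edges:
        adj[u, v] = adj[v, u] = 1.0
    # Symmetric degree normalization, as in standard GCN propagation.
    # Nodes absent from the new data keep degree 0.
    deg = adj.sum(axis=1)
    inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(np.maximum(deg, 1e-12)), 0.0)
    norm_adj = adj * inv_sqrt[:, None] * inv_sqrt[None, :]
    # Short-term signal: one propagation step over the incremental graph,
    # seeded from the old representations.
    short_term = norm_adj @ old_emb
    # Fuse long-term (old) and short-term signals. Note that inactive
    # nodes (degree 0 in the new data) receive no short-term signal here,
    # which is exactly the staleness issue CED is designed to address.
    return alpha * old_emb + (1 - alpha) * short_term
```

In this toy version, an inactive node's embedding merely decays toward its old value, which shows why a separate mechanism (CED in the paper) is needed to propagate the effect of new data to nodes outside the incremental graph.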