
How to Retrain Recommender System?

Keywords: retraining, recommender systems, overfitting, forgetting, transfer learning, machine learning, artificial neural networks, data modeling
Authors
Yang Zhang,Fuli Feng,Chenxu Wang,Xiangnan He,Meng Wang,Yan Li,Yongdong Zhang
Identifier
DOI:10.1145/3397271.3401167
Abstract

Practical recommender systems need to be periodically retrained to refresh the model with new interaction data. To pursue high model fidelity, it is usually desirable to retrain the model on both historical and new data, since this accounts for both long-term and short-term user preference. However, a full model retraining can be very time-consuming and memory-costly, especially when the scale of historical data is large. In this work, we study the model retraining mechanism for recommender systems, a topic of high practical value that has been relatively little explored in the research community. Our first belief is that retraining the model on historical data is unnecessary, since the model has been trained on it before. Nevertheless, training on new data only may easily cause overfitting and forgetting issues, since the new data is of a smaller scale and contains less information on long-term user preference. To address this dilemma, we propose a new training method that aims to abandon the historical data during retraining by learning to transfer the past training experience. Specifically, we design a neural network-based transfer component, which transforms the old model into a new model tailored for future recommendations. To learn the transfer component well, we optimize the "future performance" -- i.e., the recommendation accuracy evaluated in the next time period. Our Sequential Meta-Learning (SML) method offers a general training paradigm that is applicable to any differentiable model. We demonstrate SML on matrix factorization and conduct experiments on two real-world datasets. Empirical results show that SML not only achieves a significant speed-up but also outperforms full model retraining in recommendation accuracy, validating the effectiveness of our proposals. We release our code at: https://github.com/zyang1580/SML.
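The retraining scheme described above can be sketched in miniature. The snippet below is an illustrative toy, not the authors' implementation: it trains matrix factorization on historical data, fine-tunes a copy on new-period data only, and stands in for the paper's neural transfer component with a single learnable blend weight `alpha`, chosen by the "future performance" criterion (loss on the next period's interactions). All names (`mf_predict`, `transfer`, `alpha`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mf_predict(U, V):
    """Matrix-factorization scores for all user-item pairs."""
    return U @ V.T

def sgd_step(U, V, R, lr=0.02):
    """One full-batch gradient step of MF on interaction matrix R (squared loss)."""
    err = mf_predict(U, V) - R
    return U - lr * err @ V, V - lr * err.T @ U

# Old model: trained on historical data R_hist.
n_users, n_items, k = 8, 6, 3
R_hist = rng.random((n_users, n_items))
U = 0.1 * rng.normal(size=(n_users, k))
V = 0.1 * rng.normal(size=(n_items, k))
for _ in range(50):
    U, V = sgd_step(U, V, R_hist)

# New-period data only: smaller signal, prone to overfitting on its own.
R_new = 0.7 * R_hist + 0.3 * rng.random((n_users, n_items))
U_new, V_new = U.copy(), V.copy()
for _ in range(10):
    U_new, V_new = sgd_step(U_new, V_new, R_new)

# Stand-in transfer component: a convex blend of old and fine-tuned
# parameters (the paper instead learns a neural transfer network).
def transfer(alpha):
    return (1 - alpha) * U + alpha * U_new, (1 - alpha) * V + alpha * V_new

# Meta-objective: pick alpha by performance on the NEXT period's data.
R_future = 0.7 * R_new + 0.3 * rng.random((n_users, n_items))
alphas = np.linspace(0.0, 1.0, 21)
losses = [np.mean((mf_predict(*transfer(a)) - R_future) ** 2) for a in alphas]
best_alpha = alphas[int(np.argmin(losses))]
print(f"best blend alpha = {best_alpha:.2f}")
```

The key design point mirrored here is that the historical data never re-enters training: only the old parameters carry long-term preference forward, and the transfer step is supervised by next-period accuracy rather than by a fit to the past.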