How to Retrain Recommender System?

Keywords: retraining, recommender systems, overfitting, forgetting, machine learning, transfer learning, data modeling, neural networks
Authors
Yang Zhang, Fuli Feng, Chenxu Wang, Xiangnan He, Meng Wang, Yan Li, Yongdong Zhang
Identifier
DOI: 10.1145/3397271.3401167
Abstract

Practical recommender systems need to be retrained periodically to refresh the model with new interaction data. To pursue high model fidelity, it is usually desirable to retrain on both historical and new data, since this accounts for both long-term and short-term user preference. However, full model retraining can be very time-consuming and memory-costly, especially when the historical data is large. In this work, we study the model retraining mechanism for recommender systems, a topic of high practical value that has received relatively little attention in the research community. Our first belief is that retraining on historical data is unnecessary, since the model has already been trained on it. Nevertheless, normal training on the new data alone easily causes overfitting and forgetting, since the new data is smaller in scale and carries less information about long-term user preference. To resolve this dilemma, we propose a new training method that abandons the historical data during retraining by learning to transfer the past training experience. Specifically, we design a neural-network-based transfer component, which transforms the old model into a new model tailored for future recommendations. To learn the transfer component well, we optimize its "future performance", i.e., the recommendation accuracy evaluated in the next time period. Our Sequential Meta-Learning (SML) method offers a general training paradigm that is applicable to any differentiable model. We demonstrate SML on matrix factorization and conduct experiments on two real-world datasets. Empirical results show that SML not only achieves a significant speed-up but also outperforms full model retraining in recommendation accuracy, validating the effectiveness of our proposals. We release our code at: https://github.com/zyang1580/SML.
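The abstract describes the mechanism concretely enough to sketch. Below is a minimal PyTorch illustration of the idea, not the authors' implementation (see the linked repository for that): it assumes matrix factorization as the recommender, substitutes a small MLP for the paper's neural transfer component, and uses a binary cross-entropy loss as a stand-in objective. All names here (MF, Transfer, retrain_period) are hypothetical.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MF(nn.Module):
    """Plain matrix factorization: score(u, i) = <p_u, q_i>."""
    def __init__(self, n_users, n_items, dim):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)

    def forward(self, u, i):
        return (self.user(u) * self.item(i)).sum(-1)

class Transfer(nn.Module):
    """Maps (old embedding, embedding fine-tuned on new data) to the
    embedding served in the next period. A small MLP stands in for the
    paper's transfer component."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, hidden),
                                 nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, old, new):
        return self.net(torch.cat([old, new], dim=-1))

def retrain_period(old_model, transfer, meta_opt, data_t, data_next, steps=100):
    """One retraining period: fine-tune on the new data D_t only, fuse with
    the old model via the transfer component, then update the transfer
    component on next-period data D_{t+1} (the "future performance")."""
    u_t, i_t, y_t = data_t        # interactions observed in period t
    u_n, i_n, y_n = data_next     # interactions observed in period t+1

    # 1) Normal training on the new data only; no historical data is used.
    new_model = copy.deepcopy(old_model)
    opt = torch.optim.Adam(new_model.parameters(), lr=1e-2)
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(new_model(u_t, i_t), y_t)
        opt.zero_grad(); loss.backward(); opt.step()

    # 2) Transfer: combine old and newly trained embeddings. Detaching the
    #    inputs confines the meta-gradient to the transfer component.
    user_emb = transfer(old_model.user.weight.detach(),
                        new_model.user.weight.detach())
    item_emb = transfer(old_model.item.weight.detach(),
                        new_model.item.weight.detach())

    # 3) Meta-update: score the fused model on the NEXT period's data and
    #    backpropagate into the transfer component, so it learns to produce
    #    models that recommend well in the future.
    scores = (user_emb[u_n] * item_emb[i_n]).sum(-1)
    meta_loss = F.binary_cross_entropy_with_logits(scores, y_n)
    meta_opt.zero_grad(); meta_loss.backward(); meta_opt.step()

    # The fused embeddings become the model served (and retrained) next period.
    with torch.no_grad():
        new_model.user.weight.copy_(user_emb)
        new_model.item.weight.copy_(item_emb)
    return new_model
```

In deployment, D_{t+1} only becomes available after the fused model has served a period, so the meta-update naturally runs one period behind; across periods the transfer component accumulates the "past training experience" that replaces retraining on historical data.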