Computer science
Recommender systems
Vulnerability (computing)
Autoregressive model
Synthetic data
Data mining
Artificial intelligence
Machine learning
Information retrieval
Computer security
Econometrics
Economics
Authors
Zhenrui Yue,Zhankui He,Huimin Zeng,Julian McAuley
Identifier
DOI:10.1145/3460231.3474275
Abstract
We investigate whether model extraction can be used to ‘steal’ the weights of sequential recommender systems, and the potential threats posed to victims of such attacks. This type of risk has attracted attention in image and text classification, but to our knowledge not in recommender systems. We argue that sequential recommender systems are subject to unique vulnerabilities due to the specific autoregressive regimes used to train them. Unlike many existing recommender attackers, which assume the dataset used to train the victim model is exposed to attackers, we consider a data-free setting, where training data are not accessible. Under this setting, we propose an API-based model extraction method via limited-budget synthetic data generation and knowledge distillation. We investigate state-of-the-art models for sequential recommendation and show their vulnerability under model extraction and downstream attacks.
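The abstract outlines an API-based, data-free extraction pipeline: synthetic interaction sequences are generated under a limited query budget, and a surrogate model is trained by knowledge distillation against the victim's returned rankings. The snippet below is a minimal sketch of that idea, not the authors' implementation: `victim_topk` is a hypothetical stand-in for the black-box recommendation API, the GRU surrogate, the autoregressive sampling scheme, and the simple hinge-style ranking loss are illustrative assumptions rather than the paper's exact choices.

```python
# Minimal sketch (assumptions noted above): distill a black-box sequential
# recommender into a surrogate using synthetic, autoregressively generated data.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ITEMS, SEQ_LEN, TOPK, BUDGET = 1000, 20, 10, 256  # illustrative sizes


def victim_topk(seq_batch):
    """Hypothetical black-box API: returns top-k next-item ids per sequence.
    In a real attack this would be replaced by queries to the deployed system."""
    scores = torch.randn(seq_batch.size(0), NUM_ITEMS)  # placeholder scores
    return scores.topk(TOPK, dim=-1).indices


class Surrogate(nn.Module):
    """Small GRU-based next-item recommender acting as the extracted copy."""
    def __init__(self, num_items, dim=64):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, num_items)

    def forward(self, seqs):
        h, _ = self.gru(self.emb(seqs))
        return self.out(h[:, -1])  # scores over candidate next items


def generate_synthetic_sequences(batch_size):
    """Grow sequences autoregressively by sampling from the victim's top-k,
    so synthetic data loosely follows the victim's own item transitions."""
    seqs = torch.randint(0, NUM_ITEMS, (batch_size, 1))
    while seqs.size(1) < SEQ_LEN:
        topk = victim_topk(seqs)                                  # (B, TOPK)
        pick = topk.gather(1, torch.randint(0, TOPK, (batch_size, 1)))
        seqs = torch.cat([seqs, pick], dim=1)
    return seqs


surrogate = Surrogate(NUM_ITEMS)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(BUDGET):                                        # limited query budget
    seqs = generate_synthetic_sequences(32)
    teacher_topk = victim_topk(seqs)                               # black-box supervision
    logits = surrogate(seqs)
    # Simplified distillation loss: push the teacher's top-k items above the
    # average item score (a crude pairwise-ranking stand-in for the victim's
    # returned ranking order).
    topk_scores = logits.gather(1, teacher_topk)                   # (B, TOPK)
    loss = F.relu(1.0 - (topk_scores - logits.mean(dim=1, keepdim=True))).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once the surrogate reproduces the victim's rankings reasonably well, it can serve as a white-box proxy for crafting downstream attacks, which is the threat the paper evaluates.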