Computer Science
Recommender Systems
Machine Learning
Process (Computing)
Artificial Intelligence
Supervised Learning
Artificial Neural Networks
Operating Systems
Authors
Yanling Wang, Yuchen Liu, Qian Wang, Cong Wang, Chenliang Li
Identifier
DOI:10.1145/3539618.3591751
Abstract
Self-supervised learning (SSL) has recently been applied to sequential recommender systems to provide high-quality user representations. However, while facilitating the learning process of recommender systems, SSL is not without security threats: carefully crafted inputs can poison the pre-trained models driven by SSL, thus reducing the effectiveness of the downstream recommendation model. This work shows that poisoning attacks against the pre-training stage threaten sequential recommender systems. Without any background knowledge of the model architecture and parameters, and without any API queries, our strategy demonstrates the feasibility of poisoning attacks on mainstream SSL-based recommendation schemes and on commonly used datasets. By injecting only a tiny number of fake users, we get the target item recommended to real users thousands of times more often than before, demonstrating that SSL introduces a new attack surface for recommender systems. We further show that our attack is challenging for recommendation platforms to detect and defend against. Our work highlights the weakness of self-supervised recommender systems and underscores the need for researchers to be aware of this security threat. Our source code is available at https://github.com/CongGroup/Poisoning-SSL-based-RS.
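To make the fake-user injection idea concrete, the following is a minimal, hypothetical sketch (not the authors' actual algorithm): each crafted user's interaction sequence interleaves the attacker's target item with popular items drawn from the real data, so that SSL pre-training on the poisoned corpus picks up spurious co-occurrence signals for the target. The function name, sequence length, and item-placement heuristic are illustrative assumptions.

```python
import random


def craft_fake_user_sequences(real_sequences, target_item,
                              n_fake_users=50, seq_len=20, seed=0):
    """Illustrative poisoning sketch: build fake user histories that pair
    the target item with popular items sampled from real interactions."""
    rng = random.Random(seed)
    # Popularity-weighted item pool: frequent items appear more often here,
    # so sampling from it mimics plausible user behaviour.
    item_pool = [item for seq in real_sequences for item in seq]
    fake_sequences = []
    for _ in range(n_fake_users):
        seq = []
        for pos in range(seq_len):
            if pos % 4 == 0:
                # Periodically place the target item in the fake history.
                seq.append(target_item)
            else:
                # Fill the remaining positions with popular real items.
                seq.append(rng.choice(item_pool))
        fake_sequences.append(seq)
    return fake_sequences


if __name__ == "__main__":
    # Toy real data: each inner list is one user's chronological item IDs.
    real_data = [[3, 7, 7, 12, 5], [1, 7, 9, 3], [5, 5, 2, 7, 8]]
    poisoned = real_data + craft_fake_user_sequences(
        real_data, target_item=42, n_fake_users=3, seq_len=8)
    for seq in poisoned:
        print(seq)
```

Pre-training the SSL-based recommender on `poisoned` instead of `real_data` would, under these assumptions, bias the learned representations toward the target item while the small number of injected users keeps the manipulation hard to spot.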