Divide-and-conquer algorithm
Computer science
Bottleneck
Boosting (machine learning)
Executable
Ensemble forecasting
Task (project management)
Artificial intelligence
Machine learning
Algorithm
Management
Economics
Embedded system
Operating system
Authors
Zhuo Ma, Xinjing Liu, Yang Liu, Ximeng Liu, Zhan Qin, Kui Ren
Source
Journal: IEEE Transactions on Dependable and Secure Computing
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-05
Volume/Issue: 20 (6): 4810-4822
Citations: 5
Identifier
DOI: 10.1109/tdsc.2023.3234355
Abstract
Recently, model stealing attacks have been widely studied, but most of them focus on stealing a single non-discrete model, e.g., a neural network. For ensemble models, these attacks are either non-executable or suffer intolerable performance degradation due to the complex model structure (multiple sub-models) and the discreteness of the sub-models (e.g., decision trees). To overcome this bottleneck, this paper proposes a divide-and-conquer strategy called DivTheft that formulates a model stealing attack against common ensemble models by combining active learning (AL). Specifically, based on the boosting learning concept, we divide the hard task of stealing an ensemble model into multiple simpler tasks of stealing single sub-models. Then, we adopt AL to conquer the data-free sub-model stealing task. During this process, current AL algorithms easily cause the stolen model to be biased because they ignore useful past memories. Thus, DivTheft includes a newly designed uncertainty sampling scheme that filters reusable samples from those previously used. Experiments show that, compared with prior work, DivTheft can save almost 50% of queries while ensuring an agreement rate with the victim model that remains competitive.
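The uncertainty sampling that the abstract refers to can be illustrated with a minimal entropy-based sketch. This is a generic AL query-selection example, not DivTheft's memory-aware filtering scheme; all function names and the toy data are illustrative:

```python
import numpy as np

def uncertainty_scores(probs: np.ndarray) -> np.ndarray:
    """Predictive entropy of each sample's class-probability vector.

    Higher entropy means the current stolen (substitute) model is less
    certain, so the sample is more informative to query the victim with.
    """
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_queries(probs: np.ndarray, budget: int) -> np.ndarray:
    """Indices of the `budget` most uncertain samples (highest entropy)."""
    scores = uncertainty_scores(probs)
    return np.argsort(scores)[::-1][:budget]

# Toy example: substitute-model probabilities for 4 samples, 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],  # confident  -> low entropy, skip
    [0.34, 0.33, 0.33],  # uncertain  -> high entropy, query
    [0.70, 0.20, 0.10],
    [0.50, 0.50, 0.00],
])
picked = select_queries(probs, budget=2)  # -> indices [1, 2]
```

Within a query budget, only the selected samples are sent to the victim model for labeling, which is how AL-based attacks reduce the query count the abstract reports.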