Computer Science
Deep Learning
Artificial Intelligence
Artificial Neural Network
Adversarial System
Architecture
Schema (Genetic Algorithms)
A Priori and A Posteriori
Machine Learning
Network Architecture
Computer Security
Epistemology
Philosophy
Art
Visual Arts
Authors
Xing Hu, Ling Liang, Xiaobing Chen, Lei Deng, Yu Ji, Yufei Ding, Zidong Du, Qi Guo, Timothy Sherwood, Yuan Xie
Identifier
DOI: 10.1109/tc.2022.3148235
Abstract
As deep neural networks (DNNs) continue to find applications in ever more domains, the exact nature of the neural network architecture becomes an increasingly sensitive subject, due to either intellectual property protection or risks of adversarial attacks. While prior work has explored aspects of the risk associated with model leakage, exactly which parts of the model are most sensitive and how one infers the full architecture of the DNN when nothing is known about the structure a priori are problems that have been left unexplored. In this paper we address this gap, first by presenting a schema for reasoning about model leakage holistically, and then by proposing and quantitatively evaluating DeepSniffer, a novel learning-based model extraction framework that uses no prior knowledge of the victim model. DeepSniffer is robust to architectural and system noise introduced by the complex memory hierarchy and diverse run-time system optimizations. Taking GPU platforms as a showcase, DeepSniffer performs model extraction by learning both the architecture-level execution features of kernels and the inter-layer temporal association information introduced by the common practice of DNN design. We demonstrate that DeepSniffer works experimentally in the context of an off-the-shelf Nvidia GPU platform running a variety of DNN models and that the extracted models significantly improve attempts at crafting adversarial inputs. The DeepSniffer project has been released at https://github.com/xinghu7788/DeepSniffer.
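To make the "learning-based model extraction" idea concrete, below is a minimal sketch, not the authors' implementation, of the kind of sequence model the abstract describes: it maps a trace of per-kernel, architecture-level execution features (e.g. latency and memory read/write volume) to a sequence of predicted layer types, using a recurrent network so that inter-layer temporal association influences each prediction. The feature set, layer-type labels, and class name KernelTraceClassifier are illustrative assumptions.

```python
# Illustrative sketch only (not the DeepSniffer code): classify each GPU kernel
# in an observed execution trace into an assumed set of DNN layer types.
import torch
import torch.nn as nn

LAYER_TYPES = ["conv", "fc", "relu", "pool", "add", "concat"]  # assumed label set

class KernelTraceClassifier(nn.Module):
    def __init__(self, feat_dim=4, hidden=64, num_classes=len(LAYER_TYPES)):
        super().__init__()
        # A bidirectional LSTM captures inter-layer temporal association:
        # the prediction for one kernel depends on its neighbors in the trace.
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, kernel_features):
        # kernel_features: (batch, trace_len, feat_dim) of architecture-level
        # execution features observed per kernel launch (assumed features:
        # latency, read volume, write volume, input/output data volume).
        out, _ = self.rnn(kernel_features)
        return self.head(out)  # per-kernel layer-type logits

# Toy usage: a batch of 2 traces, each 10 kernels long, 4 features per kernel.
model = KernelTraceClassifier()
trace = torch.randn(2, 10, 4)
logits = model(trace)                      # (2, 10, len(LAYER_TYPES))
predicted_layers = logits.argmax(dim=-1)   # predicted layer type per kernel
```

The recovered layer-type sequence would then serve as a skeleton for reconstructing a substitute architecture, which the abstract reports improves the crafting of adversarial inputs against the victim model.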