Adversarial
Digital watermarking
Artificial neural network
Computer science
Artificial intelligence
Image (mathematics)
Authors
Zihan Yuan,Xinpeng Zhang,Zichi Wang,Zhaoxia Yin
Source
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
[Institute of Electrical and Electronics Engineers]
Date: 2024-03-18
Volume (Issue): 8 (4): 2775-2790
Identifier
DOI: 10.1109/tetci.2024.3372373
Abstract
Deep neural networks (DNNs) may be subject to various modifications during transmission and use. Regular processing operations do not affect the functionality of a model, while malicious tampering causes serious damage. Therefore, it is crucial to determine the availability of a DNN model. To address this issue, we propose a semi-fragile black-box watermarking method that can distinguish between accidental modification and malicious tampering of DNNs, focusing on the privacy and security of neural network models. Specifically, for a given model, a strategy is designed to generate semi-fragile, sensitive samples using adversarial example techniques without decreasing model accuracy. The model outputs for these samples are extremely sensitive to malicious tampering and robust to accidental modification. Based on these properties, accidental modification and malicious tampering can be distinguished to assess the availability of a watermarked model. Extensive experiments demonstrate that the proposed method detects malicious model tampering with accuracy of up to 100% while tolerating accidental modifications such as fine-tuning, pruning, and quantization with accuracy exceeding 75%. Moreover, our semi-fragile neural network watermarking approach can be easily extended to various DNNs.
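To give a feel for the general idea, the sketch below illustrates one possible (hypothetical) way to craft sensitive samples with an FGSM-style adversarial perturbation and to verify a model against them; it is not a reproduction of the authors' sample-generation strategy, and the model, data, epsilon, and threshold values are assumptions for illustration only.

```python
# Illustrative sketch in PyTorch, not the paper's exact method.
import torch
import torch.nn.functional as F

def generate_sensitive_samples(model, images, labels, epsilon=0.03):
    """Perturb clean images toward the decision boundary (FGSM-style step)
    so that predictions on them become sensitive to parameter changes."""
    model.eval()
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step along the gradient sign to push samples near the boundary.
    sensitive = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()
    # Record the watermarked model's predictions as the reference fingerprint.
    with torch.no_grad():
        reference = model(sensitive).argmax(dim=1)
    return sensitive, reference

def verify_model(model, sensitive, reference, threshold=0.75):
    """Compare current predictions with the stored reference: a high match
    rate suggests only benign modification (e.g., fine-tuning or pruning),
    while a low match rate suggests malicious tampering."""
    model.eval()
    with torch.no_grad():
        preds = model(sensitive).argmax(dim=1)
    match_rate = (preds == reference).float().mean().item()
    return match_rate >= threshold, match_rate
```

In this reading, the verification is black-box: only the model's outputs on the stored sensitive samples are needed, so the owner can check availability without accessing the suspect model's parameters.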