Computer science
Digital watermarking
Watermark
Subnetwork
Robustness (evolution)
Artificial intelligence
Transfer learning
Deep learning
Machine learning
Lossless compression
Embedding
Exploit
Computer security
Data compression
Image (mathematics)
Biochemistry
Chemistry
Gene
Authors
Ju Jia,Yueming Wu,Anran Li,Siqi Ma,Yang Liu
Source
Journal: IEEE Transactions on Dependable and Secure Computing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 1-16
Cited by: 5
Identifiers
DOI: 10.1109/tdsc.2022.3194704
Abstract
Recently, considerable progress has been made in preventing intellectual property (IP) theft for deep neural networks (DNNs) in ideal classification or recognition scenarios. However, little work has addressed protecting the IP of DNN models in the context of transfer learning. Moreover, knowledge transfer is usually achieved through knowledge distillation or cross-domain distribution adaptation, which can easily cause IP protection to fail because the underlying DNN watermark is at high risk of being corrupted. To address this issue, we propose a subnetwork-lossless robust DNN watermarking (SRDW) framework, which exploits out-of-distribution (OOD) guidance data augmentation to boost the robustness of watermarking. Specifically, we accurately locate the most rational modification structure (i.e., the core subnetwork) using module risk minimization, and then compute the contrastive alignment error and its hash value as reversible compensation information for restoring the carrier network. Experimental results show that our scheme is highly robust against various hostile attacks, such as fine-tuning, pruning, cross-domain matching, and overwriting. In the absence of malicious jamming attacks, the core subnetwork can be recovered without any loss. In addition, we investigate how embedding watermarks in batch normalization (BN) layers affects the generalization performance of deep transfer learning models, and find that reducing the embedding modifications in BN layers further improves robustness against hostile attacks.
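To make the BN-layer embedding idea mentioned in the abstract concrete, the following is a minimal sketch of a generic regularizer-based white-box watermark placed in BN scale (gamma) parameters. It is not the paper's SRDW framework (no core-subnetwork search, OOD guidance, or reversible compensation); the toy network, the random projection key `proj`, and the bit length `n_bits` are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's SRDW method): a generic white-box watermark
# embedded into batch-normalization scale (gamma) parameters via a secret
# random projection and a bit-matching regularizer.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy carrier network with BN layers (stand-in for a transfer-learning backbone).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
)

def bn_gammas(m):
    """Concatenate all BN scale parameters into one carrier vector."""
    return torch.cat([l.weight for l in m.modules() if isinstance(l, nn.BatchNorm2d)])

n_bits = 32                                               # assumed watermark length
watermark = torch.randint(0, 2, (n_bits,)).float()        # secret message bits
proj = torch.randn(n_bits, bn_gammas(model).numel())      # secret key matrix

def wm_loss(m):
    """Embedding regularizer: push sigmoid(proj @ gamma) toward the message bits."""
    logits = proj @ bn_gammas(m)
    return nn.functional.binary_cross_entropy_with_logits(logits, watermark)

def extract(m):
    """Extraction: threshold the projection of the BN scale vector."""
    return (proj @ bn_gammas(m) > 0).float()

# Embed by minimizing only the regularizer here; in practice it would be added
# to the task loss during (transfer) training.
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    wm_loss(model).backward()
    opt.step()

print("bit error rate:", (extract(model) != watermark).float().mean().item())
```

The bit error rate after embedding should drop to zero on this toy example; the abstract's observation suggests that, in the transfer-learning setting, limiting how much such embedding perturbs the BN parameters is what preserves both generalization and robustness.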