Computer science
Salience (neuroscience)
Universality (dynamical systems)
Artificial intelligence
Face (sociological concept)
Process (computing)
Computer security
Feature (linguistics)
Biometrics
Machine learning
Social science
Linguistics
Philosophy
Physics
Quantum mechanics
Sociology
Operating system
Authors
Rui Zhai,Rongrong Ni,Yu Chen,Yang Yu,Yao Zhao
Source
Journal: IEEE Signal Processing Letters
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume 30, pp. 1072-1076
Identifier
DOI: 10.1109/lsp.2023.3303782
Abstract
The emergence of deep learning has led to the rise of malicious face manipulation applications, which pose a significant threat to face security. To prevent forgery at its source, researchers have proposed proactive methods that disrupt the manipulation models themselves. However, these methods output distorted images exhibiting unacceptable black shadows or distorted facial features, stigmatizing the depicted face. To address this issue, we propose a Universal Proactive Warning Defense (UPWD) method, which causes fake images to present a warning pattern against multiple manipulation models. Specifically, we propose an Invisible Protection Module that generates protection messages and a feature-level measure strategy that enhances the salience of warning patterns. Furthermore, we improve the universality of the method based on Hard Model Meta-learning. Extensive experimental results on the CelebA and LFWA datasets demonstrate that the proposed UPWD method effectively defends against multiple manipulation models and outperforms existing methods.
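The core idea behind proactive defenses of this kind is to add a small, near-invisible perturbation to a face image before it is published, chosen so that a downstream manipulation model is disrupted. The sketch below is not the paper's UPWD method; it is a generic, hypothetical FGSM-style illustration in which the "manipulation model" is a plain linear map and the perturbation ascends the gradient of a proxy objective.

```python
import numpy as np

# Toy illustration (NOT the paper's UPWD method): a proactive defense
# perturbs the input so a stand-in manipulation model misbehaves.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))   # hypothetical linear "manipulation model"
x = rng.random(16)                 # hypothetical face image, flattened to [0, 1]

def model_energy(v):
    """Proxy disruption objective: squared norm of the model's output."""
    y = W @ v
    return 0.5 * float(y @ y)

# Analytic gradient of the objective w.r.t. the input: W^T (W x).
grad = W.T @ (W @ x)

eps = 0.03  # perturbation budget, kept small for imperceptibility
# FGSM-style step: move each pixel by eps in the gradient's sign direction,
# then clip back to the valid pixel range.
x_protected = np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# The change is bounded per pixel, yet the proxy objective increases,
# i.e. the stand-in model is pushed off its clean-input behavior.
print(np.max(np.abs(x_protected - x)) <= eps + 1e-9)   # True
print(model_energy(x_protected) > model_energy(x))     # True
```

Because the objective here is quadratic with a positive-semidefinite Hessian, the signed-gradient step is guaranteed to increase it; real defenses such as UPWD instead optimize against deep manipulation networks and, per the abstract, shape the disruption into a visible warning pattern rather than arbitrary artifacts.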