Authors
Pu Sun, Yuezun Li, Honggang Qi, Siwei Lyu
Identifier
DOI: 10.1109/icip46576.2022.9897756
Abstract
We describe a proactive defense method that exposes DeepFakes through training data contamination. Existing methods usually focus on defending against general DeepFakes, which are synthesized by GANs from random noise. In contrast, our method is dedicated to defending against native DeepFakes, which are synthesized by an auto-encoder and involve a face-swapping and encoding-decoding process that general DeepFakes do not have. Specifically, we design two types of traces, namely sustainable traces and erasable traces, which are added to faces to manipulate the training of DeepFake models. Consequently, the trained DeepFake model synthesizes faces that carry the sustainable traces but not the erasable traces. With the help of these two traces, we can expose DeepFakes proactively. Our method is compared with recent passive and proactive defense methods, and the results corroborate its efficacy.
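The two-trace idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual trace designs: the trace patterns, amplitudes, correlation detector, and threshold below are all assumptions chosen only to make the detection logic concrete. A DeepFake model trained on contaminated faces is assumed to reproduce the sustainable trace in its outputs while dropping the erasable trace, so a face that correlates with the sustainable pattern but not the erasable one is flagged as generated.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

# Hypothetical trace patterns (stand-ins for the paper's learned traces):
# a "sustainable" pattern meant to survive encoder-decoder training and
# an "erasable" pattern meant to be lost during reconstruction.
sustainable = 0.1 * np.sign(rng.standard_normal((H, W)))
erasable = 0.1 * np.sign(rng.standard_normal((H, W)))

def contaminate(face):
    """Add both traces to a training face (pixel values in [0, 1])."""
    return np.clip(face + sustainable + erasable, 0.0, 1.0)

def correlate(img, pattern):
    """Normalized correlation between an image and a trace pattern."""
    a = img - img.mean()
    b = pattern - pattern.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def is_deepfake(img, thresh=0.2):
    """Flag a face as model-generated if it carries the sustainable
    trace but not the erasable trace (the abstract's decision rule)."""
    return correlate(img, sustainable) > thresh and correlate(img, erasable) < thresh

# Simulate the trained model's behavior: its output reproduces the
# sustainable trace only.
clean_face = rng.random((H, W))
fake_output = np.clip(clean_face + sustainable, 0.0, 1.0)

print(is_deepfake(fake_output))  # True: sustainable trace present, erasable absent
print(is_deepfake(clean_face))   # False: neither trace present
```

The point of the simulation is the asymmetry: both traces are injected at training time via `contaminate`, but only the sustainable one is expected to survive the auto-encoder's encoding-decoding, which is what makes the pair usable as a proactive fingerprint.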