Authors
Nataniel Ruiz,Sarah Adel Bargal,Stan Sclaroff
Identifier
DOI:10.1007/978-3-030-66823-5_14
Abstract
Face modification systems based on deep learning have become increasingly powerful and accessible. Given images of a person's face, such a system can generate new images of the same person under different expressions and poses; some systems can also modify targeted attributes such as hair color or age. Manipulated images and videos of this kind have been coined deepfakes. To prevent a malicious user from generating modified images of a person without their consent, we tackle the new problem of generating adversarial attacks against such image translation systems — attacks that disrupt the resulting output image. We call this problem disrupting deepfakes. Most image translation architectures are generative models conditioned on an attribute (e.g., put a smile on this person's face). We are the first to propose and successfully apply (1) class-transferable adversarial attacks that generalize across conditioning classes, so the attacker needs no knowledge of the conditioning class, and (2) adversarial training for generative adversarial networks (GANs) as a first step toward robust image translation networks. Finally, in our scenario the deepfaker can adaptively blur the input image and thereby mount a potentially successful defense against disruption; we present a spread-spectrum adversarial attack that evades such blur defenses. We open-source our code.
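The core idea of disruption — finding a small, imperceptible perturbation of the input face that maximally distorts the generator's output — can be illustrated with a minimal PGD-style sketch. This is not the authors' implementation: the "generator" below is a toy linear map standing in for a real conditional image-translation network, and the gradient is computed analytically for that toy model. All names (`G`, `disrupt`, `W`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image-translation generator: a fixed linear map.
# (A real deepfake generator is a conditional GAN; this is only a sketch.)
W = rng.normal(size=(8, 8))

def G(x):
    return W @ x

def disrupt(x, eps=0.1, alpha=0.02, steps=40):
    """PGD-style disruption: find a perturbation delta with
    ||delta||_inf <= eps that maximizes ||G(x + delta) - G(x)||^2,
    pushing the generator's output away from its clean result."""
    # Random start: the gradient of the objective is zero at delta = 0.
    delta = rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        # Analytic gradient of ||W(x + delta) - W x||^2 w.r.t. delta:
        # 2 * W^T W delta (for a real network, use autodiff instead).
        grad = 2.0 * W.T @ (W @ delta)
        # Signed ascent step, then project back into the L_inf ball.
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta

x = rng.normal(size=8)
delta = disrupt(x)
distortion = np.linalg.norm(G(x + delta) - G(x))
print(distortion)
```

In practice one would replace `G` with the target translation network and obtain `grad` via backpropagation; the class-transferable variant described above additionally optimizes the perturbation across multiple conditioning attributes so that no single conditioning class must be known in advance.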