Adversarial system
Computer science
Artificial intelligence
Classifier (UML)
Class (philosophy)
Variety (cybernetics)
Image (mathematics)
Coding (set theory)
Computer vision
Pattern recognition (psychology)
Programming language
Set (abstract data type)
Authors
T. B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer
Source
Journal: Cornell University - arXiv
Date: 2017-01-01
Citations: 82
Identifier
DOI: 10.48550/arxiv.1712.09665
Abstract
We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class. To reproduce the results from the paper, our code is available at https://github.com/tensorflow/cleverhans/tree/master/examples/adversarial_patch
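The idea described in the abstract is to optimize a single patch image so that, wherever it is pasted into a scene, a classifier reports the chosen target class; robustness comes from optimizing over random placements and transformations. The authors' released code is the TensorFlow/cleverhans example linked above. Below is a minimal, independent sketch of that training loop in PyTorch, assuming a pretrained torchvision classifier; the target class index, patch size, learning rate, and the helper names `apply_patch` and `train_step` are illustrative placeholders, and input normalization and the full transformation set (rotation, scaling) are omitted for brevity.

```python
# Minimal sketch of adversarial-patch training (not the authors' released
# cleverhans code). Assumes a pretrained torchvision classifier; all
# hyperparameters below are illustrative.
import torch
import torch.nn.functional as F
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet50(weights="IMAGENET1K_V1").to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the patch is optimized

TARGET_CLASS = 859   # e.g. "toaster" in ImageNet; purely illustrative
PATCH_SIZE = 64      # square patch, PATCH_SIZE x PATCH_SIZE pixels
IMG_SIZE = 224

# The patch is the only trainable tensor.
patch = torch.rand(3, PATCH_SIZE, PATCH_SIZE, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch):
    """Paste the patch at a random location in each image. Random placement
    stands in for the paper's wider set of transformations."""
    patched = images.clone()
    for i in range(images.size(0)):
        x = torch.randint(0, IMG_SIZE - PATCH_SIZE, (1,)).item()
        y = torch.randint(0, IMG_SIZE - PATCH_SIZE, (1,)).item()
        patched[i, :, y:y + PATCH_SIZE, x:x + PATCH_SIZE] = patch
    return patched

def train_step(images):
    """One step: push the classifier toward TARGET_CLASS on patched scenes."""
    optimizer.zero_grad()
    logits = model(apply_patch(images.to(device), patch.clamp(0, 1)))
    targets = torch.full((images.size(0),), TARGET_CLASS,
                         device=device, dtype=torch.long)
    loss = F.cross_entropy(logits, targets)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep the patch a valid (printable) image
    return loss.item()
```

In practice the step above would be run over batches of many different scenes, which is what makes the resulting patch universal rather than tied to a single image.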