Adversarial system
Computer science
Object (grammar)
Artificial intelligence
Exploit
Viewpoint
Scheme (mathematics)
Object detection
Computer vision
Constraint (computer-aided design)
Pattern recognition (psychology)
Computer security
Engineering
Mathematics
Mechanical engineering
Art
Mathematical analysis
Visual arts
Authors
Abeer Toheed, Muhammad Haroon Yousaf, Rabnawaz, Ali Javed
Identifiers
DOI:10.1109/icodt255437.2022.9787422
Abstract
Adversarial attacks are frequently used to exploit machine learning models, including deep neural networks (DNNs), during either the training or the testing stage. DNNs under such attacks make false predictions. Digital adversarial attacks are not directly applicable in the physical world, and adversarial attacks on object detection are more difficult than adversarial attacks on image classification. This paper presents a physical adversarial attack on object detection using 3D adversarial objects. The proposed methodology overcomes the limitation of 2D adversarial patches, which work only from certain viewpoints. We map an adversarial texture onto a mesh to create a 3D adversarial object. These objects are of various shapes and sizes, and unlike adversarial patches they can be moved from one place to another. Moreover, the applicability of a 2D patch is limited to confined viewpoints. Experimental results show that our 3D adversarial objects are free from such constraints and successfully attack object detection. We use the ShapeNet dataset for the vehicle models, and the 3D objects are created using Blender 2.93 [1]. Different HDR images are incorporated to create the virtual physical environment. We target Faster R-CNN and YOLO models pre-trained on the COCO dataset as the victim DNNs. Experimental results demonstrate that the proposed attack successfully fools these object detectors.
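The abstract does not give implementation details, so the following is only a minimal sketch of the general idea it describes: optimize a texture, map it onto a vehicle mesh via its UV coordinates, render the object from multiple viewpoints, and minimize the confidence of a COCO-pretrained Faster R-CNN detector. The paper renders scenes with Blender 2.93 and HDR environments; here a differentiable renderer (PyTorch3D) stands in so the texture can be trained by gradient descent. The mesh path "car.obj", the viewpoint set, and all hyperparameters are placeholder assumptions, not values from the paper.

import torch
import torchvision
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRasterizer, MeshRenderer, PointLights,
    RasterizationSettings, SoftPhongShader, TexturesUV, look_at_view_transform,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Victim detector: Faster R-CNN pre-trained on COCO (as named in the abstract).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval().to(device)
for p in detector.parameters():
    p.requires_grad_(False)

# Placeholder vehicle mesh with UV coordinates (the paper uses ShapeNet models).
mesh = load_objs_as_meshes(["car.obj"], device=device)

# The adversarial texture map is the only trainable parameter.
adv_texture = torch.rand(1, 512, 512, 3, device=device, requires_grad=True)
optimizer = torch.optim.Adam([adv_texture], lr=0.01)

lights = PointLights(device=device, location=[[0.0, 2.0, 2.0]])
raster_settings = RasterizationSettings(image_size=512)

def render(azimuth_deg):
    # Render the mesh with the current adversarial texture from one viewpoint.
    R, T = look_at_view_transform(dist=3.0, elev=10.0, azim=azimuth_deg)
    cameras = FoVPerspectiveCameras(device=device, R=R, T=T)
    renderer = MeshRenderer(
        rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
        shader=SoftPhongShader(device=device, cameras=cameras, lights=lights),
    )
    # Re-wrap the texture each step so gradients flow into adv_texture;
    # UV layout is reused from the mesh's original texture.
    mesh.textures = TexturesUV(
        maps=adv_texture.clamp(0, 1),
        faces_uvs=mesh.textures.faces_uvs_padded(),
        verts_uvs=mesh.textures.verts_uvs_padded(),
    )
    rgba = renderer(mesh)                       # (1, H, W, 4)
    return rgba[0, ..., :3].permute(2, 0, 1)    # (3, H, W), values in [0, 1]

for step in range(200):
    optimizer.zero_grad()
    loss = 0.0
    # Multi-view objective: suppress detections across several azimuths so the
    # texture is not tied to a single viewpoint, unlike a flat 2D patch.
    for azim in (0.0, 90.0, 180.0, 270.0):
        image = render(azim)
        detections = detector([image])[0]
        if detections["scores"].numel() > 0:
            # Push down the detector's confidence for every reported box.
            loss = loss + detections["scores"].sum()
    if torch.is_tensor(loss):
        loss.backward()
        optimizer.step()

In this kind of pipeline the optimized texture would then be baked back onto the mesh and rendered (or 3D-printed) in a non-differentiable environment such as Blender for evaluation; only the multi-view texture optimization loop is sketched here.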