2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Technical Program

Paper Detail

Paper ID: CI-1.3
Paper Title: ADVERSARIAL ATTACKS ON OBJECT DETECTORS WITH LIMITED PERTURBATIONS
Authors: Zhenbo Shi, Wei Yang, Zhenbo Xu, Zhi Chen, Yingjie Li, University of Science and Technology of China, China; Haoran Zhu, University of Queensland, Australia; Liusheng Huang, University of Science and Technology of China, China
Session: CI-1: Theory for Computational Imaging
Location: Gather.Town
Session Time: Wednesday, 09 June, 15:30 - 16:15
Presentation Time: Wednesday, 09 June, 15:30 - 16:15
Presentation: Poster
Topic: Computational Imaging: [IMT] Computational Imaging Methods and Models
Abstract: Deep convolutional neural networks are widely known to be vulnerable to adversarial attacks. Recently, great progress has been made in attacking object detectors. However, current attacks neglect practical utility and rely on global perturbations of the target image, using a large number of patches or pixels. In this paper, we present a novel attack framework named DTTACK to fool both one-stage and two-stage object detectors with limited perturbations. A novel divergent patch shape consisting of four intersecting lines is proposed to effectively disrupt deep convolutional feature extraction with limited pixels. In particular, we introduce an instance-aware heat map as a self-attention module that helps DTTACK focus on salient object areas, further improving attack performance. Extensive experiments on PASCAL-VOC, MS-COCO, and an online detection system demonstrate that DTTACK surpasses state-of-the-art methods by large margins.
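
The abstract describes the divergent patch shape as four lines intersecting at a point, applied under a limited pixel budget. As a rough illustrative sketch only (not the authors' implementation; the line lengths, angles, thickness, and placement below are arbitrary assumptions), a binary mask for such a shape could be constructed as follows:

```python
import numpy as np

def divergent_patch_mask(height, width, center, line_len, thickness=1,
                         angles_deg=(0, 45, 90, 135)):
    """Binary mask of four lines intersecting at `center`.

    Hypothetical approximation of the "divergent patch shape consisting of
    four intersecting lines" mentioned in the abstract; the parameters used
    by DTTACK itself are not specified here.
    """
    mask = np.zeros((height, width), dtype=bool)
    cy, cx = center
    for angle in angles_deg:
        theta = np.deg2rad(angle)
        # Sample points along the line in both directions from the center.
        for t in np.linspace(-line_len / 2, line_len / 2, num=2 * line_len):
            y = int(round(cy + t * np.sin(theta)))
            x = int(round(cx + t * np.cos(theta)))
            y0, y1 = max(0, y - thickness // 2), min(height, y + thickness // 2 + 1)
            x0, x1 = max(0, x - thickness // 2), min(width, x + thickness // 2 + 1)
            mask[y0:y1, x0:x1] = True
    return mask

# Example: mask for a 416x416 image, centered on a salient object region at (200, 180).
mask = divergent_patch_mask(416, 416, center=(200, 180), line_len=60, thickness=2)
print("perturbed pixels:", mask.sum())  # pixel budget controlled by line length and thickness
```

In such a scheme, only the pixels where the mask is True would be perturbed during the attack, keeping the perturbation footprint small compared to global-perturbation methods.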