Advertising or adversarial? AdvSign: Artistic advertising-sign camouflage for targeted physical attacks on object detectors.

Journal: Neural Networks: the official journal of the International Neural Network Society
PMID:

Abstract

Deep learning models are often vulnerable to adversarial attacks in both digital and physical environments. Particularly challenging are physical attacks that involve subtle, unobtrusive modifications to objects, such as attaching patches or projecting light, designed to maliciously alter the model's output when the scene is captured and fed into the model. Developing physical adversarial attacks that are robust, flexible, inconspicuous, and difficult to trace remains a significant challenge. To address this issue, we propose an artistic camouflage named Adversarial Advertising Sign (AdvSign) for the object detection task, especially in autonomous driving scenarios. Artistic patterns, such as brand logos and advertising signs, typically enjoy a high tolerance for visual incongruity and are so widespread that they attract little suspicion. We design these patterns into advertising signs that can be attached to various mobile carriers, such as carry-bags and vehicle stickers, creating adversarial camouflage that is hard to trace. This method is particularly effective at misleading self-driving cars, for instance, causing them to misidentify these signs as 'stop' signs. Our approach combines a trainable adversarial patch with various artistic sign patterns to create advertising patches. By leveraging the diversity and flexibility of these patterns, we draw attention away from the conspicuous adversarial elements, enhancing the effectiveness and subtlety of our attacks. We then use the CARLA autonomous-driving simulator to place these synthesized patches onto flat 3D surfaces in different traffic scenes, rendering 2D composite scene images from various perspectives. These varied scene images are then fed into the target detector for adversarial training, yielding the final trained adversarial patch. In particular, we introduce a novel loss with artistic-pattern constraints, designed to adjust pixels differently inside and outside the advertising sign during training. Extensive experiments in both simulated (composite scene images with AdvSign) and real-world (printed AdvSign images) environments demonstrate the effectiveness of AdvSign in executing physical attacks on state-of-the-art object detectors such as YOLOv5. Our training strategy, which leverages diverse scene images and applies varied artistic transformations to the adversarial patch, enables seamless integration with multiple patterns, enhancing attack effectiveness across various physical settings and allowing easy adaptation to new environments and artistic patterns.
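The loss with artistic-pattern constraints can be illustrated with a minimal sketch. The paper's exact formulation is not reproduced here, so the PyTorch snippet below is only an illustration of the stated idea: pixels inside the sign region are constrained toward a reference artwork while pixels outside remain freely adversarial. All names (`advsign_loss`, `sign_mask`, `art_pattern`, `lambda_art`, `lambda_tv`) and the total-variation term are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def advsign_loss(target_scores, patch, art_pattern, sign_mask,
                 lambda_art=1.0, lambda_tv=0.1):
    # Attack term: raise the detector's confidence for the target class
    # (e.g. 'stop'), so we minimize its negative mean.
    attack = -target_scores.mean()
    # Artistic-pattern constraint (assumed form): inside the sign mask the
    # patch is pulled toward the reference artwork; outside it, pixels stay
    # freely adversarial.
    art = F.mse_loss(patch * sign_mask, art_pattern * sign_mask)
    # Total-variation smoothing, a common printability term in physical
    # attacks (assumed here, not stated in the abstract).
    tv = (patch[..., 1:, :] - patch[..., :-1, :]).abs().mean() \
       + (patch[..., :, 1:] - patch[..., :, :-1]).abs().mean()
    return attack + lambda_art * art + lambda_tv * tv

# Toy usage with random stand-ins for the artwork, mask, and detector output.
patch = torch.rand(1, 3, 64, 64, requires_grad=True)
art_pattern = torch.rand(1, 3, 64, 64)
sign_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
target_scores = torch.sigmoid(patch.mean() + torch.randn(8))  # stand-in scores
loss = advsign_loss(target_scores, patch, art_pattern, sign_mask)
loss.backward()  # gradients reach the patch, as in adversarial patch training
```

In the full pipeline described by the abstract, `target_scores` would come from the target detector (e.g. YOLOv5) run on CARLA-rendered composite scenes containing the patch, rather than from the stand-in used above.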

Authors

  • Guangyu Gao
    School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China. Electronic address: guangyugao@bit.edu.cn.
  • Zhuocheng Lv
    School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China.
  • Yan Zhang
    Affiliated Hospital of Liaoning University of Traditional Chinese Medicine, Shenyang 110032, China.
  • A K Qin
    Department of Computer Science and Software Engineering, Swinburne University of Technology, Melbourne, Australia.