AdvART: Adversarial Art for Camouflaged Object Detection Attacks
Document type:
Conference paper with proceedings
Permanent URL:
Title:
AdvART: Adversarial Art for Camouflaged Object Detection Attacks
Author(s):
Guesmi, Amira [Author]
New York University [Abu Dhabi]
Bilasco, Ioan Marius [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Shafique, Muhammad [Author]
New York University [Abu Dhabi]
Alouani, Ihsen [Author]
Queen's University [Belfast] [QUB]
COMmunications NUMériques - IEMN [COMNUM - IEMN]
Institut d’Électronique, de Microélectronique et de Nanotechnologie - UMR 8520 [IEMN]
Conference title:
2024 IEEE International Conference on Image Processing (ICIP)
City:
Abu Dhabi
Country:
United Arab Emirates
Conference start date:
2024-10-27
Publisher:
IEEE
Keyword(s) in English:
Adversarial patch
naturalistic patch
GANs
object detection
adversarial art
physical attacks
YOLO
latent space
HAL discipline(s):
Physics [physics]
Engineering Sciences [physics]
English abstract: [en]
Physical adversarial attacks pose a significant practical threat, as they deceive deep learning systems operating in the real world by producing prominent, maliciously designed physical perturbations. Evaluating naturalness is crucial in such attacks, as humans can easily detect unnatural manipulations. To address this, recent work has proposed leveraging generative adversarial networks (GANs) to generate naturalistic patches that look less visually suspicious and can evade human attention. However, these approaches suffer from a limited latent space, which leads to an inevitable trade-off between naturalness and attack efficiency. In this paper, we propose a novel approach to generating naturalistic and inconspicuous adversarial patches. Specifically, we redefine the optimization problem by introducing an additional loss term into the total loss. This term acts as a semantic constraint, ensuring that the generated camouflage pattern carries semantic meaning rather than being an arbitrary pattern; it leverages a similarity-metric-based loss that we optimize within the global adversarial objective function. Our technique directly manipulates the pixel values of the patch, which offers higher flexibility and a larger search space than GAN-based techniques that indirectly optimize the patch by modifying a latent vector. Our attack achieves superior success rates of up to 91.19% in the digital world and 72% when deployed on smart cameras at the edge, compared to the GAN-based approach.
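The abstract describes direct pixel-space optimization of the patch under a combined adversarial and semantic-constraint objective. The following is a minimal, hypothetical PyTorch sketch of such a loop, not the authors' released code: the helper names (detector_objectness, reference_art, apply_patch), the hyperparameters, and the use of MSE as the similarity metric are all illustrative assumptions.

import torch
import torch.nn.functional as F

def apply_patch(image, patch, top, left):
    # Paste the patch onto a copy of the scene image at position (top, left).
    patched = image.clone()
    h, w = patch.shape[-2:]
    patched[..., top:top + h, left:left + w] = patch
    return patched

def optimize_patch(detector_objectness, benign_image, reference_art,
                   steps=1000, lr=0.01, lam=1.0, top=100, left=100):
    # Patch pixels are optimized directly (no GAN latent vector); initializing from the
    # reference artwork starts the search from an already natural-looking pattern.
    patch = reference_art.clone().requires_grad_(True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(benign_image, patch, top, left)
        # Adversarial term: suppress the detector's confidence on the patched scene
        # (detector_objectness is assumed to return a differentiable scalar, e.g. the
        # maximum objectness score produced by a YOLO model).
        adv_loss = detector_objectness(patched)
        # Semantic-constraint term: keep the patch close to the reference artwork
        # (MSE is used here as a stand-in for the paper's similarity metric).
        sem_loss = F.mse_loss(patch, reference_art)
        loss = adv_loss + lam * sem_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)  # keep pixel values in the valid image range
    return patch.detach()

In this reading, the semantic term regularizes the patch toward a chosen artwork, which is why the sketch both initializes from and penalizes deviation from reference_art while the adversarial term drives the detector's score down.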
Language:
English
Peer reviewed:
Yes
Audience:
International
Popular science:
No
Source:
Deposit date:
2025-02-26T05:25:47Z