Defensive approximation: securing CNNs using approximate computing
Document type:
Conference paper with published proceedings
DOI:
Title:
Defensive approximation: securing CNNs using approximate computing
Author(s):
Guesmi, Amira [Author]
Alouani, Ihsen [Author]
COMmunications NUMériques - IEMN [COMNUM - IEMN]
Institut d’Électronique, de Microélectronique et de Nanotechnologie - UMR 8520 [IEMN]
Khasawneh, Khaled [Author]
George Mason University [Fairfax]
Baklouti, Mouna [Author]
Frikha, Tarek [Author]
Abid, Mohamed [Author]
Abu-Ghazaleh, Nael [Author]
University of California [Riverside] [UC Riverside]
Conference title:
26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS'21
City:
Virtual, USA
Country:
United States of America
Conference start date:
2021-04-19
Journal title:
Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS'21
Publisher:
ACM
Publication date:
2021-04
Keyword(s) in English:
Computer systems organization
Dependable and fault-tolerant systems and networks
Redundancy
Embedded and cyber-physical systems
Embedded systems
Robotics
Networks
Network properties
Network reliability
HAL discipline(s):
Engineering Sciences [physics]
Computer Science [cs]
Computer Science [cs]/Artificial Intelligence [cs.AI]
Computer Science [cs]/Networks and Telecommunications [cs.NI]
Engineering Sciences [physics]/Signal and Image Processing [eess.SP]
Engineering Sciences [physics]/Electronics
English abstract: [en]
In the past few years, an increasing number of machine learning and deep learning architectures, such as Convolutional Neural Networks (CNNs), have been applied to solve a wide range of real-life problems. However, these architectures are vulnerable to adversarial attacks: inputs carefully crafted to force the system to output a wrong label. Since machine learning is being deployed in safety-critical and security-sensitive domains, such attacks may have catastrophic security and safety consequences. In this paper, we propose for the first time to use hardware-supported approximate computing to improve the robustness of machine learning classifiers. We show that our approximate computing implementation achieves robustness across a wide range of attack scenarios. Specifically, we show that successful adversarial attacks against the exact classifier have poor transferability to the approximate implementation. The transferability is even poorer for black-box attack scenarios, where adversarial attacks are generated using a proxy model. Surprisingly, the robustness advantages also apply to white-box attacks, where the attacker has unrestricted access to the approximate classifier implementation: in this case, we show that substantially higher levels of adversarial noise are needed to produce adversarial examples. Furthermore, our approximate computing model maintains the same level of classification accuracy, does not require retraining, and reduces the resource utilization and energy consumption of the CNN. We conducted extensive experiments on a set of strong adversarial attacks and empirically show that the proposed implementation increases the robustness of LeNet-5 and AlexNet CNNs by up to 99% and 87%, respectively, against strong transferability-based attacks, along with up to 50% savings in energy consumption due to the simpler nature of the approximate logic. We also show that a white-box attack requires a remarkably higher noise budget to fool the approximate classifier, causing an average of 4 dB degradation of the PSNR of the input image relative to the images that succeed in fooling the exact classifier.
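The abstract measures the white-box noise budget in PSNR and attributes the defense to simplified (approximate) arithmetic. The sketch below is not the authors' hardware design; it is a minimal, hypothetical Python illustration of those two quantities: a psnr() helper that expresses perturbation strength in dB, and a toy approx_mul() that drops low-order operand bits to mimic a simplified multiplier datapath. The function names and the drop_bits parameter are illustrative assumptions, not part of the paper.

# Hypothetical sketch (not the paper's implementation): illustrates the PSNR of
# a perturbed input and a toy "approximate" multiply with a simplified datapath.
import numpy as np

def psnr(clean: np.ndarray, perturbed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equally shaped images."""
    mse = np.mean((clean.astype(np.float64) - perturbed.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

def approx_mul(a: np.ndarray, b: np.ndarray, drop_bits: int = 4) -> np.ndarray:
    """Toy approximate multiplier: zero the low-order bits of 8-bit operands
    before multiplying, mimicking a simplified hardware multiplier."""
    mask = np.uint16(0xFF & ~((1 << drop_bits) - 1))
    return (a.astype(np.uint16) & mask) * (b.astype(np.uint16) & mask)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
    # Emulate a bounded adversarial perturbation (L-inf budget of 8 gray levels).
    noise = rng.integers(-8, 9, size=clean.shape)
    perturbed = np.clip(clean.astype(np.int64) + noise, 0, 255).astype(np.uint8)
    print(f"PSNR(clean, perturbed) = {psnr(clean, perturbed):.2f} dB")
    print("exact  product:", int(clean[0, 0]) * int(perturbed[0, 0]))
    print("approx product:", int(approx_mul(clean[:1, :1], perturbed[:1, :1])[0, 0]))

A higher PSNR means a smaller perturbation; the abstract's claim is that, against the approximate classifier, a white-box attacker must accept roughly 4 dB lower PSNR, i.e. a visibly larger perturbation, to succeed.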
Language:
English
Peer reviewed:
Yes
Audience:
International
Popular science:
No
Source:
Files
- http://arxiv.org/pdf/2006.07700 (open access)
- 2006.07700 (open access)