Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters
Document type:
Conference paper with published proceedings
Title:
Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters
Author(s):
El-Allami, R. [Author]
Institut d’Électronique, de Microélectronique et de Nanotechnologie - UMR 8520 [IEMN]
Marchisio, A. [Author]
Vienna University of Technology = Technische Universität Wien [TU Wien]
Shafique, M. [Author]
New York University [Abu Dhabi]
Alouani, Ihsen [Author]
COMmunications NUMériques - IEMN [COMNUM - IEMN]
Institut d’Électronique, de Microélectronique et de Nanotechnologie - UMR 8520 [IEMN]
Conference title:
2021 Design, Automation and Test in Europe Conference and Exhibition (DATE 2021)
City:
Grenoble
Country:
France
Conference start date:
2021-02-01
Journal title:
2021 Design, Automation & Test in Europe Conference & Exhibition (DATE)
Publisher:
IEEE
Publication date:
2021
Keyword(s) in English:
Adversarial Attacks
Analysis
Deep Learning
Machine Learning
Neuromorphic
Optimization
Parameters
Robustness
Security
SNN
Spiking Neural Networks
HAL discipline(s):
Computer Science [cs]/Neural Networks [cs.NE]
Computer Science [cs]/Cryptography and Security [cs.CR]
Computer Science [cs]/Machine Learning [cs.LG]
English abstract:
Deep Learning (DL) algorithms have gained popularity owing to their practical problem-solving capacity. However, they suffer from a serious integrity threat, i.e., their vulnerability to adversarial attacks. In the quest for DL trustworthiness, recent works claimed the inherent robustness of Spiking Neural Networks (SNNs) to these attacks, without considering the variability in their structural spiking parameters. This paper explores the security enhancement of SNNs through internal structural parameters. Specifically, we investigate the robustness of SNNs to adversarial attacks under different values of the neurons' firing voltage thresholds and time window boundaries. We thoroughly study SNN security under different adversarial attacks in the strong white-box setting, with different noise budgets and variable spiking parameters. Our results show a significant impact of the structural parameters on SNN security: promising sweet spots can be reached to design trustworthy SNNs with 85% higher robustness than a traditional non-spiking DL system. To the best of our knowledge, this is the first work that investigates the impact of structural parameters on the robustness of SNNs to adversarial attacks. The proposed contributions and the experimental framework are available online at https://github.com/rda-ela/SNN-Adversarial-Attacks for reproducible research. © 2021 EDAA.
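The two structural parameters the abstract names, the firing voltage threshold and the time window, can be made concrete with a minimal leaky integrate-and-fire (LIF) neuron sketch. This is an illustrative toy, not the authors' framework; all names and values here are hypothetical.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch. The two knobs the
# paper studies as structural parameters are the firing voltage threshold
# and the simulation time window; names and values are illustrative only.

def lif_spike_train(input_current, v_threshold, t_window, leak=0.9):
    """Return the binary spike train of one LIF neuron over t_window steps."""
    v = 0.0
    spikes = []
    for t in range(t_window):
        v = leak * v + input_current[t]  # leaky integration of the input
        if v >= v_threshold:             # fire once the threshold is crossed
            spikes.append(1)
            v = 0.0                      # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# With the same constant input, a higher threshold makes the neuron fire
# far less often, which changes how input perturbations propagate.
current = [0.3] * 20
low = sum(lif_spike_train(current, v_threshold=0.5, t_window=20))
high = sum(lif_spike_train(current, v_threshold=2.0, t_window=20))
print(low, high)  # prints: 10 1
```

Such threshold- and window-dependent spiking behavior is the mechanism the paper sweeps over when searching for robustness "sweet spots".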
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Source:
Files
- http://arxiv.org/pdf/2012.05321 (open access)
- https://hal.archives-ouvertes.fr/hal-03362270/document (open access)
- R.El-Allami_2012.05321.pdf (open access)