Black-Box Adversarial Attacks on Spiking Neural Network for Time Series Data
| Published in: | 2024 International Conference on Neuromorphic Systems (ICONS), pp. 229-233 |
|---|---|
| Main authors: | , , , |
| Format: | Conference paper |
| Language: | English |
| Published: | IEEE, 30.07.2024 |
| Online access: | Full text |
| Abstract: | This paper examines the vulnerability of spiking neural networks (SNNs) trained on time series data to adversarial attacks by employing artificial neural networks as surrogate models. We specifically explore the use of a 1D Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network as surrogates to approximate the dynamics of SNNs. Our comparative analysis shows that the LSTM surrogate is particularly effective, reflecting sequential data processing capabilities similar to those of SNNs. Using two adversarial attack methods, the Fast Gradient Sign Method (FGSM) and the Carlini & Wagner (C&W) attack, we demonstrate that adversarial examples can significantly degrade the performance of SNNs. Notably, both methods, especially when applied through the LSTM surrogate, reduced the accuracy of the SNN below the level of random label choice, indicating a severe vulnerability. These results underscore the importance of incorporating robust defense mechanisms against such attacks in the design and deployment of neural networks that handle time series data. |
| DOI: | 10.1109/ICONS62911.2024.00040 |
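
The record does not include the paper's code. The following is a minimal sketch of the surrogate-based FGSM transfer attack described in the abstract, assuming a PyTorch LSTM surrogate for time series classification; the class and function names (`LSTMSurrogate`, `fgsm_attack`, `target_snn`) and all hyperparameters are illustrative, not the authors' actual implementation.

```python
# Sketch: craft FGSM adversarial time series against a white-box LSTM surrogate,
# then evaluate the black-box target SNN on the transferred examples.
import torch
import torch.nn as nn

class LSTMSurrogate(nn.Module):
    """Hypothetical LSTM surrogate approximating the target SNN's behaviour."""
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])   # classify from the final time step

def fgsm_attack(model, x, y, eps=0.05):
    """One gradient-sign step on the surrogate's loss (Fast Gradient Sign Method)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    n_features, n_classes = 3, 5
    surrogate = LSTMSurrogate(n_features, n_classes)  # would be trained to mimic the SNN
    x = torch.randn(8, 100, n_features)               # placeholder batch of time series
    y = torch.randint(0, n_classes, (8,))             # placeholder labels
    x_adv = fgsm_attack(surrogate, x, y)
    # target_snn (the black-box spiking model) would now be evaluated on x_adv;
    # the drop in its accuracy measures how well the attack transfers.
```

In this setting the surrogate is attacked in a white-box fashion, and the resulting perturbed inputs are simply replayed against the SNN, which is the black-box transfer strategy the abstract evaluates with both FGSM and the C&W attack.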