Black-Box Adversarial Attacks on Spiking Neural Network for Time Series Data

Bibliographic Details
Published in: 2024 International Conference on Neuromorphic Systems (ICONS), pp. 229-233
Main Authors: Hutchins, Jack; Ferrer, Diego; Fillers, James; Schuman, Catherine
Format: Conference Paper
Language: English
Published: IEEE, 30 July 2024
Description
Summary: This paper examines the vulnerability of spiking neural networks (SNNs) trained on time series data to adversarial attacks by employing artificial neural networks as surrogate models. We specifically explore the use of a 1D Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network as surrogates to approximate the dynamics of SNNs. Through our comparative analysis, we found the LSTM surrogate particularly effective, as its sequential data processing mirrors that of SNNs. Using two adversarial attack methods, the Fast Gradient Sign Method (FGSM) and the Carlini & Wagner (C&W) attack, we demonstrate that adversarial examples can significantly degrade the performance of SNNs. Notably, both methods, especially when applied through the LSTM surrogate, reduced the accuracy of the SNN below that of random label choice, indicating a severe vulnerability. These results underscore the importance of incorporating robust defense mechanisms against such attacks into the design and deployment of neural networks handling time series data.
DOI: 10.1109/ICONS62911.2024.00040
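
As a rough illustration of the surrogate-based black-box attack described in the summary, the sketch below shows how an FGSM perturbation could be crafted against a differentiable LSTM stand-in and then transferred to the SNN. The model class, tensor shapes, and epsilon value are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: black-box FGSM on a time-series classifier via an LSTM
# surrogate. Architecture, shapes, and epsilon are assumptions for illustration.
import torch
import torch.nn as nn

class LSTMSurrogate(nn.Module):
    """Simple LSTM classifier used as a differentiable stand-in for the SNN."""
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # logits from the last time step

def fgsm_attack(surrogate, x, y, epsilon=0.05):
    """Craft adversarial time series with one signed-gradient step on the surrogate."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(surrogate(x_adv), y)
    loss.backward()
    # Perturb in the direction that increases the surrogate's loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage sketch: the perturbed series would then be fed to the (non-differentiable)
# SNN to measure how much accuracy drops under the transferred attack.
# surrogate = LSTMSurrogate(n_features=1, n_classes=5)
# x_adv = fgsm_attack(surrogate, x_batch, y_batch)
```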