Gradual Surrogate Gradient Learning in Deep Spiking Neural Networks

Detailed bibliography
Published in: Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (2022), pp. 8927-8931
Main authors: Chen, Yi; Zhang, Silin; Ren, Shiyu; Qu, Hong
Format: Conference paper
Language: English
Published: IEEE, 23.05.2022
ISSN: 2379-190X
Online access: Get full text
Description
Summary: Spiking Neural Networks (SNNs) are a promising solution for ultra-low-power hardware. Recent SNNs have reached the performance of Deep Neural Networks (DNNs) on many tasks. However, these methods often suffer from the long simulation times needed to obtain accurate spike-train information. In addition, they are contingent on a well-designed initialization to transmit gradient information effectively. To address these issues, we propose the Internal Spiking Neuron Model (ISNM), which uses the synaptic current instead of spike trains as the carrier of information. We also design a gradual surrogate gradient learning algorithm that ensures SNNs back-propagate gradient information effectively in the early stage of training and more accurately in the later stage. Experiments with various network structures on the CIFAR-10 and CIFAR-100 datasets show that the proposed method can exceed the performance of previous SNN methods within 5 time steps.
DOI: 10.1109/ICASSP43922.2022.9746774
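
The "gradual surrogate gradient" idea described in the summary can be illustrated with a short sketch. The Python/PyTorch code below is an assumption-laden example, not the authors' implementation: it uses a sigmoid-derivative surrogate whose steepness k is annealed from smooth to sharp over training, so gradients flow easily in the early stage and approximate the true spike derivative more closely later. The names (SurrogateSpike, steepness_schedule), the choice of a sigmoid surrogate, the linear schedule, and all constants are illustrative assumptions; the paper's exact surrogate function, its schedule, and the ISNM synaptic-current dynamics are not reproduced here.

```python
# A minimal sketch (not the paper's implementation) of surrogate-gradient
# spiking with a steepness parameter k annealed during training: a smooth
# (small-k) surrogate early on lets gradients propagate easily, while a
# sharper (large-k) surrogate later approximates the spike derivative
# more closely. All names, constants, and the linear schedule are
# illustrative assumptions.

import torch


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; sigmoid-derivative
    surrogate of steepness k in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential, k):
        ctx.save_for_backward(membrane_potential)
        ctx.k = k
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(ctx.k * v)
        surrogate_grad = ctx.k * sig * (1.0 - sig)  # d/dv sigmoid(k*v)
        return grad_output * surrogate_grad, None  # no gradient w.r.t. k


def steepness_schedule(epoch, total_epochs, k_start=1.0, k_end=10.0):
    """Linearly anneal the surrogate steepness from smooth to sharp
    (the k range here is an arbitrary illustrative choice)."""
    t = epoch / max(total_epochs - 1, 1)
    return k_start + t * (k_end - k_start)


# Usage: one thresholding step inside a toy training loop.
if __name__ == "__main__":
    torch.manual_seed(0)
    v = torch.randn(4, requires_grad=True)  # membrane potentials
    for epoch in range(3):
        k = steepness_schedule(epoch, total_epochs=3)
        spikes = SurrogateSpike.apply(v - 0.5, k)  # threshold 0.5 (assumed)
        loss = spikes.sum()
        loss.backward()
        print(f"epoch {epoch}: k={k:.1f}, grad={v.grad}")
        v.grad = None
```

As the output shows, the same membrane potentials yield increasingly peaked gradients around the threshold as k grows, which is one plausible way to read "effective gradient information early, more accurate gradient information later."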