TAIL: Exploiting Temporal Asynchronous Execution for Efficient Spiking Neural Networks with Inter-Layer Parallelism

Detailed bibliography
Published in: Proceedings - Design, Automation, and Test in Europe Conference and Exhibition, pp. 1-7
Main authors: Li, Haomin; Liu, Fangxin; Wang, Zongwu; Lyu, Dongxu; Huang, Shiyuan; Yang, Ning; Sun, Qi; Song, Zhuoran; Jiang, Li
Format: Conference paper
Language: English
Published: EDAA, 31.03.2025
ISSN: 1558-1101
Description
Summary: Spiking neural networks (SNNs) are an alternative computational paradigm to artificial neural networks (ANNs) that have attracted attention due to their event-driven execution mechanisms, enabling extremely low energy consumption. However, the existing SNN execution model, based on software simulation or synchronized hardware circuitry, is incompatible with this event-driven nature, resulting in poor performance and energy efficiency. The challenge arises because neuron computations must be repeated across multiple time steps, increasing latency and energy consumption. To overcome this bottleneck and leverage the full potential of SNNs, we propose TAIL, a pioneering temporal asynchronous execution mechanism for SNNs driven by a comprehensive analysis of SNN computations. Additionally, we propose an efficient dataflow design to support SNN inference, enabling concurrent computation of various time steps across multiple layers for optimal Processing Element (PE) utilization. Our evaluations show that TAIL greatly improves the performance of SNN inference, achieving a 6.94× speedup and a 6.97× increase in energy efficiency over current SNN computing platforms.
DOI: 10.23919/DATE64628.2025.10993093
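
The mechanism described in the summary - letting a layer start its computation for time step t as soon as the preceding layer has produced its spikes for that step, so different layers work on different time steps at once - can be sketched in a few lines of Python. The sketch below is a minimal hypothetical illustration, assuming simple leaky integrate-and-fire (LIF) dynamics; the function lif_step, the network sizes, and all parameters are our own assumptions, not the paper's actual dataflow or PE scheduling.

import numpy as np

def lif_step(v, x, w, v_th=1.0, leak=0.9):
    # One leaky integrate-and-fire update: leak, integrate, fire, reset.
    v = leak * v + x @ w                # leaky integration of weighted input spikes
    spikes = (v >= v_th).astype(float)  # fire where the threshold is crossed
    v = v * (1.0 - spikes)              # hard reset of neurons that fired
    return v, spikes

rng = np.random.default_rng(0)
T, n_in, n_hid, n_out = 4, 8, 16, 4     # time steps and (assumed) layer sizes
w1 = rng.normal(0.0, 0.5, (n_in, n_hid))
w2 = rng.normal(0.0, 0.5, (n_hid, n_out))
inputs = (rng.random((T, n_in)) < 0.3).astype(float)  # random input spike trains

v1, v2 = np.zeros(n_hid), np.zeros(n_out)
out_spikes = np.zeros((T, n_out))

# Dependency structure: layer 2 at time step t consumes layer 1's spikes for
# the same t, and each layer carries only its own membrane state across steps.
# A synchronous schedule would finish all T steps of layer 1 before starting
# layer 2; a temporally asynchronous schedule hands spikes over per time step,
# so on parallel hardware layer 1 (step t+1) can run alongside layer 2 (step t).
for t in range(T):
    v1, s1 = lif_step(v1, inputs[t], w1)
    v2, out_spikes[t] = lif_step(v2, s1, w2)

print("output spike counts per neuron:", out_spikes.sum(axis=0))

In this sequential loop the dependencies are explicit: layer 2 at time step t needs only layer 1's spikes for the same t plus its own membrane state from t-1, which is exactly what permits the inter-layer, cross-time-step pipelining the summary describes.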