The Deep Neural Network Compilers for FPGAs: A Survey

Bibliographic Details
Published in: 2023 3rd International Conference on Electronic Information Engineering and Computer Communication (EIECC), pp. 677–681
Main Authors: Tian, Jing; Shi, Tianjie; Zhao, Yixuan; Liu, Feiyang
Format: Conference paper
Language: English
Published: IEEE, 22 December 2023
Description
Summary: In embedded systems, FPGAs, with their low latency, low power consumption, and reconfigurability, carry out the inference of many computationally intensive workloads, such as artificial intelligence algorithms, making deep neural network compilers for FPGAs a hot topic. These compilers automatically optimize both the model computation graph and the hardware architecture design, enabling efficient, rapid end-to-end deployment of neural network models. This paper summarizes the general framework of neural network compilers for FPGAs, traces the development of three representative compilers, compares the deployment performance of specific algorithms on FPGA platforms, summarizes their advantages and limitations, and finally proposes application prospects and important research directions in embedded systems, providing a theoretical basis for the rapid iteration and deployment of artificial intelligence algorithms in embedded computing systems.
DOI:10.1109/EIECC60864.2023.10456741