ETA: An Efficient Training Accelerator for DNNs Based on Hardware-Algorithm Co-Optimization
| Published in: | IEEE Transactions on Neural Networks and Learning Systems, Vol. 34, No. 10, pp. 7660 - 7674 |
|---|---|
| Main authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE, 01.10.2023 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Subjects: | |
| ISSN: | 2162-237X, 2162-2388 |
| Summary: | Recently, the efficient training of deep neural networks (DNNs) on resource-constrained platforms has attracted increasing attention as a way to protect user privacy. However, it remains a severe challenge, since DNN training involves intensive computation and a large amount of data access. To address these issues, in this work we implement an efficient training accelerator (ETA) on a field-programmable gate array (FPGA) using a hardware-algorithm co-optimization approach. A novel training scheme is proposed to effectively train DNNs with 8-bit precision and arbitrary batch sizes, in which a compact yet powerful data format and a hardware-oriented normalization layer are introduced. The computational complexity and memory accesses are thus significantly reduced. In the ETA, a reconfigurable processing element (PE) is designed to support the various computational patterns that arise during training while avoiding the redundant calculations caused by nonunit-stride convolutional layers. With a flexible network-on-chip (NoC) and a hierarchical PE array, computational parallelism and data reuse can be fully exploited, and memory accesses are further reduced. In addition, a unified computing core is developed to execute auxiliary layers such as normalization and weight update (WU); it works in a time-multiplexed manner and consumes only a small amount of hardware resources. Experiments show that our training scheme achieves state-of-the-art accuracy across multiple models, including CIFAR-VGG16, CIFAR-ResNet20, CIFAR-InceptionV3, ResNet18, and ResNet50. Evaluated on three networks (CIFAR-VGG16, CIFAR-ResNet20, and ResNet18), our ETA on a Xilinx VC709 FPGA achieves throughputs of 610.98, 658.64, and 811.24 GOPS, respectively. Compared with the prior art, our design demonstrates a speedup of 3.65× and an energy efficiency improvement of 8.54× on CIFAR-ResNet20. |
|---|---|
| DOI: | 10.1109/TNNLS.2022.3145850 |
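
The abstract centers on training DNNs in 8-bit precision. The paper's compact data format and hardware-oriented normalization layer are not reproduced in this record, so the snippet below is only a minimal sketch of the general idea of emulating an 8-bit datapath during training, assuming plain symmetric per-tensor fake quantization in NumPy; the function name and quantization scheme are illustrative assumptions, not the authors' method.

```python
import numpy as np


def fake_quantize_int8(x: np.ndarray) -> np.ndarray:
    """Symmetric per-tensor INT8 fake quantization (illustrative only).

    Values are scaled into [-127, 127], rounded to integers, then
    de-scaled, so downstream computation sees the precision loss that
    an 8-bit datapath would introduce.
    """
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return x.copy()
    scale = max_abs / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q.astype(np.float32) * scale


# Example: quantize a convolution weight tensor and measure the error introduced.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 3, 3, 3)).astype(np.float32)
w_q = fake_quantize_int8(w)
print("max abs quantization error:", np.max(np.abs(w - w_q)))
```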