Detailed bibliography
| Title: | TinyFoA: Memory Efficient Forward-Only Algorithm for On-Device Learning |
| Authors: | Huang, Baichuan; Aminifar, Amir |
| Contributors: | Lund University, Faculty of Engineering (LTH), Department of Electrical and Information Technology (Originator); Lund University, Profile areas and other strong research environments, Strategic research areas (SRA); ELLIIT: the Linköping-Lund initiative on IT and mobile communication (Originator) |
| Source: | Proceedings of the AAAI Conference on Artificial Intelligence, 39(16):17377-17385. 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025) |
| Topics: | Natural Sciences; Computer and Information Sciences; Computer Sciences; Computer Engineering |
| Description: | Forward-only algorithms offer a promising memory-efficient alternative to Backpropagation (BP) for on-device training. However, state-of-the-art forward-only algorithms, e.g., Forward-Forward (FF), still require a substantial amount of memory during training, often exceeding the limits of mobile edge and Internet of Things (IoT) devices. At the same time, existing memory-optimization techniques, e.g., binarizing parameters and activations, are mainly designed for BP and therefore significantly degrade classification performance when applied to state-of-the-art forward-only algorithms. In this paper, we propose TinyFoA, a memory-efficient forward-only algorithm that reduces dynamic memory overhead during training. TinyFoA improves memory efficiency not only through layer-wise training but also by partially updating each layer and by binarizing the weights and activations. We extensively evaluate TinyFoA against BP and other forward-only algorithms and demonstrate its effectiveness and superiority over state-of-the-art forward-only algorithms in terms of classification performance and training memory overhead, reducing memory overhead by an order of magnitude. |
| Access URL: | https://doi.org/10.1609/aaai.v39i16.33910 |
| Database: | SwePub |
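
The abstract names three memory-saving ingredients: layer-wise training, partial layer updates, and binarized weights and activations. The following is a minimal toy sketch of the latter two ideas in plain Python; it is an illustration under our own assumptions, not the authors' TinyFoA implementation, and the names (`BinaryLayer`, `partial_update`) are invented for this example.

```python
# Hypothetical sketch (NOT the paper's code): a layer that
# (1) binarizes weights and activations with a sign function on the
#     forward pass while keeping full-precision "latent" weights, and
# (2) updates only a fraction of its rows per step, mimicking the
#     "partially updating each layer" idea from the abstract.

import random

def binarize(xs):
    """Sign binarization: map each value to +1.0 or -1.0."""
    return [1.0 if x >= 0 else -1.0 for x in xs]

class BinaryLayer:
    def __init__(self, n_in, n_out, seed=0):
        rng = random.Random(seed)
        # Full-precision latent weights; binarized only on the forward pass.
        self.w = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)]
                  for _ in range(n_out)]

    def forward(self, x):
        xb = binarize(x)  # binarized activations
        out = []
        for row in self.w:
            wb = binarize(row)  # binarized weights
            out.append(sum(wi * xi for wi, xi in zip(wb, xb)))
        return out

    def partial_update(self, grads, lr=0.01, fraction=0.5, seed=None):
        """Apply a gradient step to only a random subset of output rows,
        reducing the state that must be touched per training step."""
        rng = random.Random(seed)
        n_rows = len(self.w)
        chosen = rng.sample(range(n_rows), max(1, int(fraction * n_rows)))
        for i in chosen:
            for j in range(len(self.w[i])):
                self.w[i][j] -= lr * grads[i][j]
        return chosen
```

In a layer-wise training loop, each such layer would be trained against its own local objective (as in forward-only schemes generally), so no full backward pass through the network is stored; how TinyFoA defines that objective is specified in the paper itself, not here.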