AdaQAT: Adaptive Bit-Width Quantization-Aware Training

Saved in:
Detailed bibliography
Title: AdaQAT: Adaptive Bit-Width Quantization-Aware Training
Authors: Gernigon, Cédric; Filip, Silviu-Ioan; Sentieys, Olivier; Coggiola, Clément; Bruno, Mickael
Contributors: Gernigon, Cédric
Source: 2024 IEEE 6th International Conference on AI Circuits and Systems (AICAS), pp. 442-446
Publication Status: Preprint
Publisher Information: IEEE, 2024.
Publication Year: 2024
Subject Terms: [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI], FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Adaptive Bit-Width Optimization, Neural Network Compression, [INFO.INFO-AO] Computer Science [cs]/Computer Arithmetic, Quantization Aware Training, Machine Learning (cs.LG)
Description: Large-scale deep neural networks (DNNs) have achieved remarkable success in many application scenarios. However, the high computational complexity and energy costs of modern DNNs make their deployment on edge devices challenging. Model quantization is a common approach to deal with deployment constraints, but searching for optimized bit-widths can be challenging. In this work, we present Adaptive Bit-Width Quantization Aware Training (AdaQAT), a learning-based method that automatically optimizes weight and activation signal bit-widths during training for more efficient DNN inference. We use relaxed real-valued bit-widths that are updated using a gradient descent rule, but are otherwise discretized for all quantization operations. The result is a simple and flexible QAT approach for mixed-precision uniform quantization problems. Compared to other methods that are generally designed to be run on a pretrained network, AdaQAT works well in both training from scratch and fine-tuning scenarios. Initial results on the CIFAR-10 and ImageNet datasets using ResNet20 and ResNet18 models, respectively, indicate that our method is competitive with other state-of-the-art mixed-precision quantization approaches.
Document Type: Article; Conference object
File Description: application/pdf
DOI: 10.1109/aicas59952.2024.10595895
DOI: 10.48550/arxiv.2404.16876
Access URL: http://arxiv.org/abs/2404.16876
https://hal.science/hal-04549245v1
Rights: STM Policy #29
arXiv Non-Exclusive Distribution
CC BY
Accession Number: edsair.doi.dedup.....ca0474eb4a607bb5a7c11c5e23f69b9e
Database: OpenAIRE
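
The abstract describes learning relaxed real-valued bit-widths by gradient descent while discretizing them for all quantization operations. Below is a minimal illustrative sketch of that idea, assuming a PyTorch setup; the class and function names, the symmetric uniform quantizer, and the straight-through rounding are assumptions made for illustration and are not the authors' implementation.

# Sketch: a learnable real-valued bit-width that is rounded (discretized)
# whenever it is used for quantization; a straight-through estimator lets
# gradients from the task loss reach the real-valued bit-width parameter.

import torch
import torch.nn as nn


def round_ste(x: torch.Tensor) -> torch.Tensor:
    """Round in the forward pass, identity gradient in the backward pass."""
    return x + (torch.round(x) - x).detach()


class AdaptiveUniformQuantizer(nn.Module):
    def __init__(self, init_bits: float = 8.0):
        super().__init__()
        # Relaxed, real-valued bit-width updated by gradient descent.
        self.bits = nn.Parameter(torch.tensor(init_bits))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        # Discretize the bit-width for the quantization operation itself.
        b = round_ste(self.bits.clamp(2.0, 16.0))
        levels = 2.0 ** b - 1.0
        # Symmetric uniform quantization of the tensor to b bits.
        scale = w.abs().max().clamp(min=1e-8) / (levels / 2.0)
        return round_ste(w / scale).clamp(-levels / 2.0, levels / 2.0) * scale


# Usage: quantize a layer's weights during training; quant.bits receives
# gradients and could be driven by an accuracy/cost trade-off loss term.
layer = nn.Linear(64, 10)
quant = AdaptiveUniformQuantizer(init_bits=8.0)
y = nn.functional.linear(torch.randn(4, 64), quant(layer.weight), layer.bias)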