Multiobjective Linear Ensembles for Robust and Sparse Training of Few-Bit Neural Networks

Detailed bibliography
Title: Multiobjective Linear Ensembles for Robust and Sparse Training of Few-Bit Neural Networks
Authors: Ambrogio Maria Bernardelli, Stefano Gualandi, Simone Milanesi, Hoong Chuin Lau, Neil Yorke-Smith
Contributors: Bernardelli, Ambrogio Maria, Gualandi, Stefano, Milanesi, Simone, Lau, Hoong Chuin, Yorke-Smith, Neil
Source: INFORMS Journal on Computing, 37:623–643
Publisher information: Institute for Operations Research and the Management Sciences (INFORMS), 2025.
Publication year: 2025
Subjects: Artificial Intelligence and Robotics, Computer Sciences, Mixed-integer linear programming, Binarized neural networks, Integer neural networks, Structured ensemble, Few-shot learning, Multiobjective optimization, Sparsity
Description: Training neural networks (NNs) using combinatorial optimization solvers has gained attention in recent years. In low-data settings, the use of state-of-the-art mixed-integer linear programming solvers, for instance, has the potential to train an NN exactly while avoiding compute-intensive training and hyperparameter tuning, and to simultaneously train and sparsify the network. We study the case of few-bit discrete-valued neural networks: both binarized neural networks (BNNs), whose values are restricted to ±1, and integer-valued neural networks (INNs), whose values lie in the range [−P, P]. Few-bit NNs are receiving increasing recognition because of their lightweight architecture and their ability to run on low-power devices, for example by being implemented using Boolean operations. This paper proposes new methods to improve the training of BNNs and INNs. Our contribution is a multiobjective ensemble approach based on training a single NN for each possible pair of classes and applying a majority voting scheme to predict the final output. Our approach results in the training of robust sparsified networks whose output is not affected by small perturbations of the input and whose number of active weights is as small as possible. We empirically compare this BeMi approach with the current state of the art in solver-based NN training and with traditional gradient-based training, focusing on BNN learning in few-shot contexts. We compare the benefits and drawbacks of INNs versus BNNs, shedding new light on the distribution of weights over the [−P, P] interval. Finally, we compare multiobjective versus single-objective training of INNs, showing that robustness and network simplicity can be acquired simultaneously, thus obtaining better test performance. Whereas the previous state-of-the-art approaches achieve an average accuracy of 51.1% on the Modified National Institute of Standards and Technology (MNIST) data set, the BeMi ensemble approach achieves an average accuracy of 68.4% when trained with 10 images per class and 81.8% when trained with 40 images per class, while having up to 75.3% of NN links removed.
History: Accepted by Andrea Lodi, Area Editor for Design & Analysis of Algorithms—Discrete.
Funding: This research was partially supported by the European Union Horizon 2020 Research and Innovation Programme [Grant 952215]. The work of A. M. Bernardelli is supported by a PhD scholarship funded under the “Programma Operativo Nazionale Ricerca e Innovazione” 2014–2020.
Supplemental Material: The software that supports the findings of this study is available within the paper as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2023.0281). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/.
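The description above states the core BeMi mechanism: train one small few-bit network for every pair of classes and combine their outputs by majority vote. The Python sketch below illustrates only that one-vs-one voting step under stated assumptions; it is not the authors' implementation, and the names `predict_ensemble` and `pairwise_nets` are hypothetical stand-ins for the MILP-trained pairwise classifiers.

# Minimal illustrative sketch of the one-vs-one majority-voting scheme.
# Training each pairwise few-bit network via a multiobjective MILP is out
# of scope here; `pairwise_nets` holds hypothetical trained classifiers.
from collections import Counter
from itertools import combinations

def predict_ensemble(x, classes, pairwise_nets):
    """Classify input `x` by majority vote over one network per class pair.

    `pairwise_nets[(i, j)]` is assumed to be a callable that, given `x`,
    returns either label `i` or label `j`.
    """
    votes = Counter()
    for i, j in combinations(sorted(classes), 2):
        winner = pairwise_nets[(i, j)](x)  # this net only separates i from j
        votes[winner] += 1
    # The label that wins the most pairwise contests is the ensemble output.
    return votes.most_common(1)[0][0]

# Example: the 10 MNIST digit classes yield C(10, 2) = 45 pairwise networks:
# label = predict_ensemble(image, range(10), pairwise_nets)

For k classes this scheme requires C(k, 2) networks (45 for the 10 MNIST digits), each of which the paper trains and sparsifies exactly with a solver.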
Document type: Article
File description: application/pdf; electronic
Language: English
ISSN: 1091-9856 (print); 1526-5528 (online)
DOI: 10.1287/ijoc.2023.0281
Rights: CC BY
Accession number: edsair.doi.dedup.....85faca03090de8215fcdffa5042bf107
Database: OpenAIRE