GOAT method: Green Orthogonal Array Tuning method

Bibliographic Details
Published in: Alexandria Engineering Journal, Volume 133, pp. 13–41
Main Authors: Ranković, Nevena; Ranković, Dragica
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.12.2025
ISSN: 1110-0168
Description
Summary: This paper is a natural extension of our previous work and introduces an eco-efficient and integrated approach to hyper-parameter optimization (HPO) using Taguchi’s orthogonal array tuning method (OATM), which forms the basis for our GOAT (Green Orthogonal Array Tuning) method across leading models in Machine Learning (ML), Deep Learning (DL), and Graph Neural Networks (GNNs): XGBoost, LightGBM, CatBoost, LSTM, GRU, GGNN, and GGSNN. Taguchi’s method requires fewer than 10 experiments and just 11 seconds of running time for all models, demonstrating its efficiency. GGSNN emerges as the best-performing model overall. A comprehensive case study on software estimation, using 46 publicly available datasets, highlights the method’s ability to reduce time and energy consumption while improving accuracy, promoting sustainable practices and high-impact real-world applications.

Highlights:
• Hyper-parameter tuning transforms good models into state-of-the-art performers.
• Evaluation shows that the GOAT method outperforms existing frameworks.
• The GOAT method identified the top 5 hyper-parameters for the best models in machine learning, deep learning, and graph neural networks.
• The GOAT method requires fewer than 10 experiments and 11 seconds, making it eco-efficient.
DOI: 10.1016/j.aej.2025.10.044
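
To make the orthogonal-array idea in the abstract concrete, the sketch below shows Taguchi-style hyper-parameter tuning with a standard L9(3^4) array: nine runs cover four factors at three levels each (a full factorial would need 3^4 = 81 runs), followed by a simple main-effect analysis to pick a level per factor. The model (scikit-learn's GradientBoostingRegressor), the four hyper-parameters, their candidate levels, and the synthetic dataset are illustrative assumptions for this sketch only; they are not the GOAT method's actual configuration or results.

```python
"""Minimal sketch of orthogonal-array hyper-parameter tuning (illustrative only).

Assumptions: the surrogate model, the four factors, their three candidate
levels, and the synthetic dataset are placeholders, not taken from the paper.
"""
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Standard Taguchi L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each.
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

# Illustrative hyper-parameters and candidate levels (not the paper's values).
factors = {
    "n_estimators":  [100, 200, 400],
    "max_depth":     [2, 3, 5],
    "learning_rate": [0.01, 0.05, 0.1],
    "subsample":     [0.6, 0.8, 1.0],
}
names = list(factors)

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

results = []
for run in L9:
    params = {name: factors[name][level] for name, level in zip(names, run)}
    model = GradientBoostingRegressor(random_state=0, **params)
    # Mean cross-validated R^2 is the response for this orthogonal-array run.
    score = cross_val_score(model, X, y, cv=3, scoring="r2").mean()
    results.append((score, params))

# Main-effect analysis: average the response over the runs that used each
# level of each factor, then keep the best level -- the usual Taguchi step.
best = {}
for j, name in enumerate(names):
    level_means = [
        np.mean([s for (s, _), run in zip(results, L9) if run[j] == lvl])
        for lvl in range(3)
    ]
    best[name] = factors[name][int(np.argmax(level_means))]

print("Best single run:", max(results, key=lambda r: r[0]))
print("Main-effect recommendation:", best)
```

Because the array is orthogonal, every level of every factor appears in exactly three of the nine runs, so the per-level averages are balanced comparisons; this is what lets the fractional design stand in for the full grid at a fraction of the runtime and energy cost described in the abstract.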