Bibliographic Details
| Title: | Using Permutation-Based Feature Importance for Improved Machine Learning Model Performance at Reduced Costs |
| Authors: | Adam Khan, Asad Ali, Jahangir Khan, Fasee Ullah, Muhammad Faheem |
| Source: | IEEE Access, Vol. 13, pp. 36421-36435 (2025) |
| Publisher Information: | Institute of Electrical and Electronics Engineers (IEEE), 2025 |
| Publication Year: | 2025 |
| Subject Terms: | default settings, predictive accuracy, machine learning (ML), software fault prediction (SFP), hyperparameter, permutation feature importance (PFI), model-agnostic techniques, computational cost, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 |
| Description: | In Software Quality Assurance (SQA), predicting defect-prone software modules is essential for ensuring software reliability and consistency. This task is commonly tackled with Machine Learning (ML) techniques, but improving model performance typically incurs significant computational cost. These high costs and uncertain payoffs make many software engineering researchers reluctant to optimize ML models, creating a need for techniques that approach the performance of tuned hyperparameter settings while retaining the computational efficiency of default settings. To address this, we employed five ML models (Decision Tree, Ranger, Random Forest, Support Vector Machine, and k-Nearest Neighbors) and optimized their hyperparameters using random search. Our experiments covered six diverse Software Fault Prediction (SFP) datasets, encompassing various software features, application domains, and defect patterns, to evaluate the approach’s generalizability and effectiveness. The model-agnostic Permutation Feature Importance (PFI) method was then used to identify the ten features most critical to model accuracy and efficiency. These selected features were used to retrain the ML models with default settings (no hyperparameter tuning) to determine whether similar performance could be achieved at low computational cost. The results show an average accuracy improvement of 77.39% and a 92.02% reduction in computational cost; the best case attained a 99.25% accuracy improvement and a 96.77% cost reduction. These results show that PFI-based feature selection can deliver high performance at a fraction of the computational cost, offering an efficient way for software engineers to optimize ML models. |
| Document Type: | Article |
| ISSN: | 2169-3536 |
| DOI: | 10.1109/access.2025.3544625 |
| Access URL: | https://doaj.org/article/e64084339e72482598b9a0278242c824 |
| Rights: | CC BY |
| Accession Number: | edsair.doi.dedup.....b19ca2efd42cfbeb3722f9b965c549e0 |
| Database: | OpenAIRE |