Detailed record
| Title: |
A comparative analysis of ML techniques for bug report classification |
| Authors: |
Laiq, Muhammad; Ali, Nauman Bin; Börstler, Jürgen; Engström, Emelie
| Contributors: |
Lund University, Faculty of Engineering (LTH), Department of Computer Science, Software Engineering Research Group; Lund University, Strategic Research Areas (SRA), ELLIIT: the Linköping-Lund initiative on IT and mobile communication; Lund University, Faculty of Engineering (LTH), LTH Profile Area: AI and Digitalization
| Source: |
Journal of Systems and Software, Vol. 227
| Subjects: |
Natural Sciences, Computer and Information Sciences, Software Engineering, Computer Engineering
| Description: |
Several studies have evaluated various ML techniques and found promising results in classifying bug reports. However, these studies have used different evaluation designs, making it difficult to compare their results. Furthermore, they have focused primarily on accuracy and have not considered other potentially relevant factors such as generalizability, explainability, and maintenance cost. These two aspects make it difficult for practitioners and researchers to choose an appropriate ML technique for a given context. Therefore, we compare promising ML techniques against practitioners' concerns using evaluation criteria that go beyond accuracy. Based on an existing framework for adopting ML techniques, we developed an evaluation framework for ML techniques for bug report classification. We used this framework to compare nine ML techniques on three datasets. The results enable a trade-off analysis between various promising ML techniques and show that the ML technique with the highest predictive accuracy might not be the most suitable one for some contexts. The overall approach presented in the paper supports making informed decisions when choosing ML techniques. It is not locked to the specific techniques, datasets, or factors selected here, and others can readily adapt it for additional techniques or concerns. Editor's note: Open Science material was validated by the Journal of Systems and Software Open Science Board.
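As a purely illustrative sketch (not taken from the paper, and not using its datasets or techniques), the kind of accuracy-only comparison the abstract argues is insufficient might look like the following, assuming scikit-learn and a toy set of hand-labelled bug reports:

```python
# Illustrative sketch only -- hypothetical data and model choices,
# not the authors' evaluation framework.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy reports labelled as real bug (1) or non-bug (0); real studies use
# thousands of reports mined from issue trackers.
reports = [
    "App crashes with NullPointerException on startup",
    "Please add a dark mode to the settings page",
    "Memory leak when uploading large files",
    "Documentation typo in the installation guide",
    "Segmentation fault after resizing the window",
    "Feature request: export results as CSV",
]
labels = [1, 0, 1, 0, 1, 0]

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "naive_bayes": MultinomialNB(),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

for name, model in candidates.items():
    pipe = make_pipeline(TfidfVectorizer(), model)
    scores = cross_val_score(pipe, reports, labels, cv=3, scoring="f1")
    # Predictive accuracy/F1 is only one criterion; the paper's framework
    # additionally weighs factors such as explainability, generalizability,
    # and maintenance cost before recommending a technique.
    print(f"{name}: mean F1 = {scores.mean():.2f}")
```

The point of the paper's framework is precisely that a ranking produced by such a script alone would be an incomplete basis for adoption decisions.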
| Access URL: |
https://doi.org/10.1016/j.jss.2025.112457 |
| Database: |
SwePub |