Competence Measure Enhanced Ensemble Learning Voting Schemes

Saved in:
Bibliographic Details
Title: Competence Measure Enhanced Ensemble Learning Voting Schemes
Authors: McFadden, Francesca
Source: The ITEA Journal of Test and Evaluation, Vol. 46
Publisher Information: International Test and Evaluation Association, 2025.
Publication Year: 2025
Subjects: class representation, differing underlying training data, training data distribution across the classifiers, Learning Voting Schemes
Description: Ensemble learning methods use the predictions of multiple classifier models. A well-formed ensemble should be built from classifiers with varied assumptions, e.g., differing underlying training data and feature space selection, and therefore differing decision boundaries. A voting scheme weighs the decisions of the individual classifiers to determine how they are combined, fused, or selected among to predict a class; such schemes often consider each classifier's self-reported confidence in its predictions. Complementary features, class representation, and the distribution of training data across the classifiers are advantageous, but existing schemes do not fully exploit them. Network approaches that attempt to learn the complementary traits of classifiers may cost end users explainability.
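The confidence-weighted ("soft") voting the description refers to can be illustrated with a short sketch. The function below is a minimal, hypothetical example, not the article's competence-measure method: the function name, the fixed per-classifier weights, and the toy data are all assumptions made for illustration.

import numpy as np

def confidence_weighted_vote(probas, weights=None):
    # probas:  (n_classifiers, n_samples, n_classes) array of each
    #          classifier's reported per-class confidence.
    # weights: optional per-classifier competence weights; uniform if omitted.
    probas = np.asarray(probas, dtype=float)
    if weights is None:
        weights = np.ones(probas.shape[0])
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so fused scores stay in [0, 1]
    # Weighted soft vote: competence-weighted average of the confidences,
    # then pick the class with the highest fused score for each sample.
    fused = np.tensordot(weights, probas, axes=(0, 0))  # (n_samples, n_classes)
    return fused.argmax(axis=1)

# Toy example: three classifiers, two samples, three classes.
probas = [
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],  # classifier A
    [[0.4, 0.4, 0.2], [0.2, 0.5, 0.3]],  # classifier B
    [[0.1, 0.3, 0.6], [0.3, 0.3, 0.4]],  # classifier C
]
print(confidence_weighted_vote(probas, weights=[0.5, 0.3, 0.2]))  # -> [0 1]

Hard majority voting would discard the confidences entirely; weighting each classifier by a competence estimate, as sketched here, is one simple way to use them and is the kind of mechanism a competence measure could refine.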
Document Type: Article; Other literature type
ISSN: 1054-0229
DOI: 10.61278/itea.46.3.1007
DOI: 10.13016/m27j2d-feg3
Accession Number: edsair.doi.dedup.....47bd4c15058df1c8d381b535a23a7302
Database: OpenAIRE