Competence Measure Enhanced Ensemble Learning Voting Schemes

Saved in:
Bibliographic Details
Title: Competence Measure Enhanced Ensemble Learning Voting Schemes
Authors: McFadden, Francesca
Publisher Information: Maryland Shared Open Access Repository, 2025.
Publication Year: 2025
Keywords: class representation, differing underlying training data, training data distribution across the classifiers, Learning Voting Schemes
Description: Ensemble learning methods combine the predictions of multiple classifier models. A well-formed ensemble is built from classifiers with differing assumptions, e.g., differing underlying training data and feature-space selection, and therefore differing decision boundaries. A voting scheme weighs the decisions of the individual classifiers to determine how their predictions are combined, fused, or selected among to predict a class. Voting schemes often consider each classifier's self-reported confidence in its predictions. Complementary features, class representation, and the distribution of training data across the classifiers are advantageous, but existing schemes do not fully exploit them. Network approaches that attempt to learn the complementary traits of classifiers can cost end users explainability.
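The confidence-weighted voting idea mentioned in the description can be illustrated with a minimal sketch. This is a generic example only, not the competence-measure scheme this work proposes; the function name and the NumPy-based interface are assumptions for illustration.

# Illustrative sketch only: a generic confidence-weighted voting rule,
# not the competence-measure scheme proposed in this work.
import numpy as np

def confidence_weighted_vote(probas):
    # probas: list of (n_samples, n_classes) class-probability arrays,
    # one per classifier in the ensemble (hypothetical interface).
    stacked = np.stack(probas)                       # (n_clf, n_samples, n_classes)
    confidence = stacked.max(axis=2, keepdims=True)  # each classifier's reported confidence
    fused = (stacked * confidence).sum(axis=0)       # weigh each vote by its own confidence
    return fused.argmax(axis=1)                      # predicted class per sample

# Three classifiers, two samples, three classes:
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
p2 = np.array([[0.4, 0.4, 0.2], [0.2, 0.3, 0.5]])
p3 = np.array([[0.5, 0.3, 0.2], [0.1, 0.2, 0.7]])
print(confidence_weighted_vote([p1, p2, p3]))        # -> [0 1]

Here each classifier's vote is scaled by its own maximum class probability before fusion; a competence measure, by contrast, would weigh votes by externally assessed classifier competence rather than self-reported confidence alone.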
Publication Type: Other literature type
Language: English
DOI: 10.13016/m27j2d-feg3
Document Code: edsair.doi...........d55a3db236c32e18baca18a8ddcbb682
Database: OpenAIRE