Competence Measure Enhanced Ensemble Learning Voting Schemes

Saved in:
Detailed bibliography
Title: Competence Measure Enhanced Ensemble Learning Voting Schemes
Authors: McFadden, Francesca
Publisher information: Maryland Shared Open Access Repository, 2025.
Year of publication: 2025
Subjects: class representation, differing underlying training data, training data distribution across the classifiers, Learning Voting Schemes
Description: Ensemble learning methods use the predictions of multiple classifier models. A well-formed ensemble should be built from classifiers with varying assumptions, e.g., differing underlying training data and feature-space selection, and therefore differing decision boundaries. A voting scheme weighs the decisions of the individual classifiers to determine how they may be combined, fused, or selected among to predict a class; such schemes often consider each classifier's self-reported confidence in its predictions. Complementary features, class representation, and the distribution of training data across the classifiers are advantageous, but existing schemes do not fully exploit them. Network approaches that attempt to learn the complementary traits of classifiers may lose explainability for end users.
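To illustrate the kind of voting scheme the description refers to, here is a minimal sketch of confidence-weighted voting. This is an assumption-laden example for orientation only, not the competence measure proposed in the work itself; the function name and the sample votes are hypothetical.

```python
# Minimal sketch of confidence-weighted ensemble voting (illustrative only;
# not the paper's competence-measure-enhanced scheme).

def weighted_vote(predictions):
    """predictions: list of (class_label, confidence) pairs, one per
    classifier in the ensemble. Returns the class label with the highest
    total confidence-weighted support."""
    totals = {}
    for label, confidence in predictions:
        totals[label] = totals.get(label, 0.0) + confidence
    return max(totals, key=totals.get)

# Three classifiers vote: two moderately confident votes for "cat"
# together outweigh one highly confident vote for "dog".
result = weighted_vote([("cat", 0.55), ("cat", 0.60), ("dog", 0.90)])
# result == "cat"
```

A competence-enhanced scheme, as the description suggests, would go further by weighting each classifier according to properties such as its class representation or training-data distribution, rather than relying solely on its self-reported confidence.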
Document type: Other literature type
Language: English
DOI: 10.13016/m27j2d-feg3
Accession number: edsair.doi...........d55a3db236c32e18baca18a8ddcbb682
Database: OpenAIRE