From explanations to feature selection: assessing SHAP values as feature selection mechanism


Bibliographic Details
Published in: Proceedings - Brazilian Symposium on Computer Graphics and Image Processing, pp. 340-347
Main Authors: Marcilio, Wilson E., Eler, Danilo M.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2020
ISSN:2377-5416
Description
Summary: Explainability has become one of the most discussed topics in machine learning research in recent years. Although many methodologies have been proposed to provide explanations for black-box models, little attention has been paid to the pre-processing steps in the machine learning development pipeline, such as feature selection. In this work, we evaluate SHAP, a game-theoretic approach for explaining the output of any machine learning model, as a feature selection mechanism. Our experiments show that, besides explaining a model's decisions, SHAP achieves better results than three commonly used feature selection algorithms.
DOI:10.1109/SIBGRAPI51738.2020.00053
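The selection scheme the abstract describes, ranking features by their SHAP values and keeping the top-ranked ones, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses the exact closed-form SHAP values of a linear model with independent features (where the SHAP value of feature j for sample x is w_j * (x_j - E[x_j])) on synthetic data, and the weights, feature count, and top-k cutoff are all hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 features, but only features 0 and 3 influence the target.
X = rng.normal(size=(200, 5))
w = np.array([2.0, 0.0, 0.0, -1.5, 0.0])  # hypothetical linear model weights
y = X @ w

# Exact SHAP values for a linear model with independent features:
# phi_j(x) = w_j * (x_j - E[x_j])   (one value per sample per feature).
shap_values = (X - X.mean(axis=0)) * w

# Global importance score: mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)

# Feature selection: keep the top-k features by SHAP importance.
k = 2
selected = np.argsort(importance)[::-1][:k]
print(sorted(selected.tolist()))  # -> [0, 3]
```

In practice the per-sample SHAP values would come from an explainer (e.g. the `shap` package) applied to the trained model, but the selection step is the same: aggregate absolute attributions per feature and retain the highest-scoring subset.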