Shapley-Lorenz eXplainable Artificial Intelligence


Bibliographic Details
Published in: Expert Systems with Applications, Vol. 167, p. 114104
Main Authors: Giudici, Paolo; Raffinetti, Emanuela
Format: Journal Article
Language: English
Published: New York: Elsevier Ltd (Elsevier BV), 01.04.2021
ISSN: 0957-4174, 1873-6793
Description
Summary:
Highlights:
• A new global eXplainable Artificial Intelligence method is proposed.
• Our method is based on the use of Shapley values and Lorenz Zonoid decomposition.
• The derived variable importance criterion fulfills the explainability requirement.
• The application to bitcoin data shows the above-mentioned advantages.

Abstract:
Explainability of artificial intelligence methods has become a crucial issue, especially in the most regulated fields, such as health and finance. In this paper, we provide a global explainable AI method based on Lorenz decompositions, thus extending previous contributions based on variance decompositions. This makes the resulting Shapley-Lorenz decomposition more generally applicable and provides a unifying variable importance criterion that combines predictive accuracy with explainability, using a normalised and easy-to-interpret metric. The proposed decomposition is illustrated in the context of a real financial problem: the prediction of bitcoin prices.
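
To make the construction concrete, the sketch below computes Shapley-Lorenz contributions for a toy regression problem: for each feature, the gain in the Lorenz zonoid of the fitted values when that feature joins a coalition of the remaining features is averaged with the usual Shapley weights. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes an ordinary least-squares model and uses the Gini-coefficient formula for the Lorenz zonoid of a non-negative variable; the function names (lorenz_zonoid, fit_predict, shapley_lorenz) and the synthetic data are illustrative.

import numpy as np
from itertools import combinations
from math import factorial

def lorenz_zonoid(y):
    # Lorenz zonoid value of a non-negative variable; for a univariate
    # variable it coincides with the Gini coefficient of the sample.
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * y) / (n * y.sum()) - (n + 1) / n

def fit_predict(X, y, cols):
    # Least-squares fit on the feature subset `cols`; an empty subset
    # yields the constant mean prediction (Lorenz zonoid equal to zero).
    if not cols:
        return np.full_like(y, y.mean(), dtype=float)
    Xs = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return Xs @ beta

def shapley_lorenz(X, y):
    # Shapley-weighted average gain in the Lorenz zonoid of the fitted
    # values when each feature joins a coalition of the remaining ones.
    p = X.shape[1]
    contrib = np.zeros(p)
    for k in range(p):
        rest = [j for j in range(p) if j != k]
        for size in range(len(rest) + 1):
            for S in combinations(rest, size):
                w = factorial(len(S)) * factorial(p - len(S) - 1) / factorial(p)
                gain = (lorenz_zonoid(fit_predict(X, y, list(S) + [k]))
                        - lorenz_zonoid(fit_predict(X, y, list(S))))
                contrib[k] += w * gain
    return contrib

# Toy usage: three synthetic predictors for a positive response.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 5.0 + 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0.0, 0.1, 200)
print(shapley_lorenz(X, y))  # the first feature should dominate

Because the Lorenz zonoid is normalised, the resulting contributions are directly comparable across features, which is the property the abstract highlights when it describes the criterion as combining predictive accuracy with explainability.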
DOI: 10.1016/j.eswa.2020.114104