Shapley-Lorenz eXplainable Artificial Intelligence



Bibliographic Details
Published in: Expert Systems with Applications, Volume 167, p. 114104
Main authors: Giudici, Paolo; Raffinetti, Emanuela
Format: Journal Article
Language: English
Publication details: New York: Elsevier Ltd (Elsevier BV), 01.04.2021
ISSN: 0957-4174, 1873-6793
Description
Summary:
Highlights:
• A new global eXplainable Artificial Intelligence method is proposed.
• Our method is based on the use of Shapley values and Lorenz Zonoid decomposition.
• The derived variable importance criterion fulfills the explainability requirement.
• The application to bitcoin data shows the above-mentioned advantages.

Explainability of artificial intelligence methods has become a crucial issue, especially in the most regulated fields, such as health and finance. In this paper, we provide a global explainable AI method which is based on Lorenz decompositions, thus extending previous contributions based on variance decompositions. This allows the resulting Shapley-Lorenz decomposition to be more generally applicable, and provides a unifying variable importance criterion that combines predictive accuracy with explainability, using a normalised and easy-to-interpret metric. The proposed decomposition is illustrated within the context of a real financial problem: the prediction of bitcoin prices.
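The record does not include the authors' code, but the idea described in the abstract can be sketched: treat the Lorenz zonoid (a Gini-type concentration measure) of the predictions obtained from each feature subset as the value function of a cooperative game, and attribute it to individual features with Shapley weights. The Python sketch below is an illustrative approximation under stated assumptions, not the authors' implementation: it uses a plain Gini-coefficient proxy for the Lorenz zonoid, refits an ordinary least-squares model on every feature subset, and assumes a non-negative response such as prices; the names lorenz_zonoid, subset_zonoid and shapley_lorenz are hypothetical.

# Illustrative sketch (not the authors' code): Shapley-weighted attribution of a
# Lorenz zonoid (Gini-type) measure of the model predictions to each feature.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.linear_model import LinearRegression


def lorenz_zonoid(values):
    """Gini-type concentration of a non-negative vector, used here as a
    simple proxy for the Lorenz zonoid value of model predictions."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    ranks = np.arange(1, n + 1)
    return (2.0 / (n * n * v.mean())) * np.sum((ranks - (n + 1) / 2.0) * v)


def subset_zonoid(X, y, features):
    """Lorenz zonoid proxy of predictions from a model using only `features`."""
    if not features:
        return 0.0  # empty coalition: no explained concentration
    cols = list(features)
    model = LinearRegression().fit(X[:, cols], y)
    return lorenz_zonoid(model.predict(X[:, cols]))


def shapley_lorenz(X, y):
    """Shapley-weighted marginal Lorenz zonoid contribution of each feature."""
    k = X.shape[1]
    contrib = np.zeros(k)
    for j in range(k):
        rest = [f for f in range(k) if f != j]
        for size in range(len(rest) + 1):
            for S in combinations(rest, size):
                # Classical Shapley weight |S|!(k-|S|-1)!/k!
                w = factorial(len(S)) * factorial(k - len(S) - 1) / factorial(k)
                gain = subset_zonoid(X, y, set(S) | {j}) - subset_zonoid(X, y, S)
                contrib[j] += w * gain
    return contrib

By the efficiency property of Shapley values, the per-feature contributions returned by shapley_lorenz(X, y) sum to the Lorenz zonoid proxy of the full-model predictions, which is what makes such a criterion normalised and comparable across features; the exhaustive subset loop is exponential in the number of features, so this sketch is only practical for small feature sets.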
DOI: 10.1016/j.eswa.2020.114104