Shapley-Lorenz eXplainable Artificial Intelligence

Bibliographic Details
Published in: Expert Systems with Applications, Volume 167, Article 114104
Main authors: Giudici, Paolo; Raffinetti, Emanuela
Format: Journal Article
Language: English
Published: New York: Elsevier Ltd, 01.04.2021
ISSN: 0957-4174, 1873-6793
Description
Summary:
Highlights:
• A new global eXplainable Artificial Intelligence method is proposed.
• Our method is based on the use of Shapley values and Lorenz Zonoid decomposition.
• The derived variable importance criterion fulfils the explainability requirement.
• The application to bitcoin data shows the above-mentioned advantages.

Abstract: Explainability of artificial intelligence methods has become a crucial issue, especially in the most regulated fields, such as health and finance. In this paper, we provide a global explainable AI method based on Lorenz decompositions, thus extending previous contributions based on variance decompositions. This makes the resulting Shapley-Lorenz decomposition more generally applicable, and provides a unifying variable importance criterion that combines predictive accuracy with explainability, using a normalised and easy-to-interpret metric. The proposed decomposition is illustrated within the context of a real financial problem: the prediction of bitcoin prices.
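To make the idea summarised above concrete, the following is a minimal Python sketch of a Shapley-style decomposition of a Lorenz-zonoid (Gini-type) measure of the fitted values. It is not the authors' reference implementation: the linear model, the simplified zonoid formula and all function and variable names are assumptions made purely for illustration.

# Illustrative sketch (assumptions, not the paper's exact formulas): each feature's
# importance is the Shapley-weighted average, over all coalitions of the other
# features, of the gain in a Gini-type Lorenz-zonoid value of the fitted predictions.
from itertools import combinations
from math import comb

import numpy as np
from sklearn.linear_model import LinearRegression


def lorenz_zonoid(y_hat):
    """Gini-type mutual-variability measure of the fitted values
    (simplified here as 2*cov(y_hat, rank) / (n * mean(y_hat)))."""
    n = len(y_hat)
    ranks = np.argsort(np.argsort(y_hat)) + 1          # ranks 1..n
    return 2.0 * np.cov(y_hat, ranks, bias=True)[0, 1] / (n * y_hat.mean())


def fit_and_score(X, y, features):
    """Fit a simple model on a feature subset and return the zonoid of its predictions."""
    if not features:
        return 0.0                                     # empty model explains no variability
    model = LinearRegression().fit(X[:, list(features)], y)
    return lorenz_zonoid(model.predict(X[:, list(features)]))


def shapley_lorenz(X, y):
    """Shapley-weighted marginal Lorenz-zonoid contribution of each feature."""
    p = X.shape[1]
    contrib = np.zeros(p)
    for k in range(p):
        others = [j for j in range(p) if j != k]
        for size in range(p):
            weight = 1.0 / (p * comb(p - 1, size))     # Shapley kernel weight
            for S in combinations(others, size):
                gain = fit_and_score(X, y, S + (k,)) - fit_and_score(X, y, S)
                contrib[k] += weight * gain
    return contrib


# Hypothetical usage on synthetic data (the paper itself uses bitcoin price data):
# rng = np.random.default_rng(0)
# X = rng.normal(size=(200, 3))
# y = 5.0 + X @ np.array([1.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)
# print(shapley_lorenz(X, y))

By the efficiency property of Shapley values, the contributions computed this way sum to the zonoid value of the model fitted on all features, so each one can be read as that feature's normalised share of the explained mutual variability.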
DOI: 10.1016/j.eswa.2020.114104