Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability

Bibliographic Details
Published in: International Journal of Information Management, Vol. 69, Art. no. 102538
Authors: Herm, Lukas-Valentin; Heinrich, Kai; Wanner, Jonas; Janiesch, Christian
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.04.2023
Subjects:
ISSN: 0268-4012, 1873-4707
Online access: Full text
Description
Abstract: Machine learning algorithms enable advanced decision making in contemporary intelligent systems. Research indicates that there is a tradeoff between their model performance and explainability. Machine learning models with higher performance are often based on more complex algorithms and therefore lack explainability, and vice versa. However, there is little to no empirical evidence of this tradeoff from an end user perspective. We aim to provide empirical evidence by conducting two user experiments. Using two distinct datasets, we first measure the tradeoff for five common classes of machine learning algorithms. Second, we address the problem of end user perceptions of explainable artificial intelligence augmentations aimed at increasing the understanding of the decision logic of high-performing complex models. Our results diverge from the widespread assumption of a tradeoff curve and indicate that the tradeoff between model performance and explainability is much less gradual in the end user's perception. This stands in stark contrast to the assumed inherent model interpretability. Further, we found the tradeoff to be situational, for example due to data complexity. The results of our second experiment show that while explainable artificial intelligence augmentations can be used to increase explainability, the type of explanation plays an essential role in end user perception.

Highlights:
• Theoretical algorithm interpretability does not entail perceived explainability.
• The tradeoff can be characterized by a group structure rather than a curve.
• Tree-based machine learning algorithms achieve the best explainability results.
• While performance distance increases for complex datasets, explainability distance decreases.
• Local XAI augmentations requiring low cognitive effort fare better with end users.
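For orientation, the following is a minimal, hypothetical sketch (not the authors' experimental protocol) of how the performance side of such a tradeoff could be measured: five common classes of machine learning algorithms are trained on a single dataset and compared on held-out accuracy. The dataset, hyperparameters, and the conventional interpretability ordering noted in the comments are illustrative assumptions, not details from the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Illustrative dataset; the paper uses two distinct datasets not named here.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Five common classes of ML algorithms, conventionally ordered from "more
# interpretable" to "more complex"; this ordering is what the paper questions.
models = {
    "logistic_regression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "svm_rbf": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "neural_network": make_pipeline(
        StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)
    ),
}

# Fit each model class and report held-out accuracy as the performance measure.
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name:20s} test accuracy = {acc:.3f}")
```

Measuring the other side of the tradeoff, perceived explainability, is the subject of the paper's user experiments and cannot be captured by code alone.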
DOI: 10.1016/j.ijinfomgt.2022.102538