How do ML practitioners perceive explainability? an interview study of practices and challenges

Detailed bibliography
Published in: Empirical Software Engineering: An International Journal, Vol. 30, No. 1, p. 18
Main authors: Habiba, Umm-e-, Habib, Mohammad Kasra, Bogner, Justus, Fritzsch, Jonas, Wagner, Stefan
Format: Journal Article
Language: English
Published: New York: Springer US, 1 February 2025 (Springer Nature B.V.)
ISSN: 1382-3256, 1573-7616
Description
Summary: Explainable artificial intelligence (XAI) is a field of study that focuses on developing AI-based systems whose decision-making processes are understandable and transparent to users. Research has already identified explainability as an emerging requirement for AI-based systems that use machine learning (ML) techniques. However, there is a notable absence of studies investigating how ML practitioners perceive the concept of explainability, the challenges they encounter, and the potential trade-offs with other quality attributes. In this study, we investigate how practitioners define explainability for AI-based systems and what challenges they encounter in making such systems explainable. Furthermore, we explore how explainability interacts with other quality attributes. To this end, we conducted semi-structured interviews with 14 ML practitioners from 11 companies. Our study reveals diverse viewpoints on explainability and the practices applied. Results suggest that the importance of explainability lies in enhancing transparency, refining models, and mitigating bias. Methods like SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) are frequently used by ML practitioners to understand how models work, while tailored approaches are typically adopted to meet the specific requirements of stakeholders. Moreover, we discerned emerging challenges in eight categories. Issues such as effective communication with non-technical stakeholders and the absence of standardized approaches are frequently mentioned as recurring hurdles. We contextualize these findings in terms of requirements engineering and conclude that industry currently lacks a standardized framework to address arising explainability needs.
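To make the SHAP usage mentioned in the abstract concrete, the following Python snippet is a minimal, purely illustrative sketch of how a practitioner might inspect a model's behavior; the dataset, model, and sample size are assumptions for the example and are not taken from the study itself.

```python
# Illustrative sketch only: inspecting a model with SHAP.
# The dataset, model, and sample size are assumptions, not artifacts of the study.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple tree ensemble on a public regression dataset.
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values efficiently for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])  # one row of attributions per sample

# Global view: which features drive predictions across the sample.
shap.summary_plot(shap_values, X[:100], feature_names=data.feature_names)
```

Per-instance attributions like these are one way practitioners address the transparency and model-refinement goals the study reports; LIME follows a similar pattern but fits a local surrogate model around each prediction instead.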
DOI: 10.1007/s10664-024-10565-2