Transparency and explainability of AI systems: From ethical guidelines to requirements

Bibliographic Details
Published in:Information and Software Technology Vol. 159; p. 107197
Main Authors: Balasubramaniam, Nagadivya, Kauppinen, Marjo, Rannisto, Antti, Hiekkanen, Kari, Kujala, Sari
Format: Journal Article
Language:English
Published: Elsevier B.V., 01.07.2023
ISSN:0950-5849, 1873-6025
Description
Summary:
•The AI ethical guidelines of 16 organizations emphasize explainability as the core of transparency.
•A model and a template are proposed for the systematic definition of explainability requirements.
•Empirical evaluation of the model and the template revealed important practices.
•When defining explainability requirements, it is important to use multi-disciplinary teams.
•Multi-disciplinary teams help to specify the purpose of the AI system and to analyze the possible negative consequences of the system.

Recent studies have highlighted transparency and explainability as important quality requirements of AI systems. However, relatively few case studies describe the current state of defining these quality requirements in practice. This study consisted of two phases. The first goal of our study was to explore what ethical guidelines organizations have defined for the development of transparent and explainable AI systems; we then investigated how explainability requirements can be defined in practice. In the first phase, we analyzed the ethical guidelines of 16 organizations representing different industries and the public sector. We then conducted an empirical study to evaluate the results of the first phase with practitioners. The analysis of the ethical guidelines revealed that almost all of the organizations highlight the importance of transparency and consider explainability an integral part of it. To support the definition of explainability requirements, we propose a model of explainability components for identifying explainability needs and a template for representing explainability requirements. The paper also describes the lessons we learned from applying the model and the template in practice. For researchers, this paper provides insights into what organizations consider important in the transparency and, in particular, the explainability of AI systems. For practitioners, this study suggests a systematic and structured way to define the explainability requirements of AI systems. Furthermore, the results emphasize a set of good practices that help to define the explainability of AI systems.
DOI:10.1016/j.infsof.2023.107197