A Validation Methodology for XAI Decision Support Systems Against Relational Domain Properties

Bibliographic Details
Published in: Journal of Software: Evolution and Process, Vol. 37, Issue 10
Authors: De Angelis, Emanuele; De Angelis, Guglielmo; Mongelli, Maurizio; Proietti, Maurizio
Format: Journal Article
Language: English
Published: Chichester: Wiley Subscription Services, Inc., 1 October 2025
ISSN: 2047-7473, 2047-7481
Online access: Full text
Description
Abstract: The global adoption of artificial intelligence (AI) has increased dramatically in recent years, and AI has become commonplace in many fields. Such pervasiveness has changed how AI is perceived, intensifying discussions on its societal consequences. As a result, a new class of requirements for AI-based solutions has emerged. Broadly speaking, requirements on "explainability" aim to provide a transparent representation of the (often opaque) reasoning method that an AI-based solution uses when prompted. This work presents a methodology for validating a class of explainable AI (XAI) models, called deterministic rule-based models, which are used to express an explainable approximation of machine-learning-based classifiers. The validation methodology combines logical deduction with constraint-based reasoning in numerical domains; it either succeeds or returns quantitative estimates of the deviations found. This information allows us to assess the correctness of an XAI model or, when deviations occur, to evaluate whether the model can still be deemed acceptable. The validation methodology has been applied in a simulation-based study where the decision-making process copes with the spread of SARS-CoV-2 inside a railway station. The case study is a controlled but nontrivial example that demonstrates the overall applicability of the methodology.
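The abstract only outlines the approach. As an illustration of the kind of check it describes, validating a deterministic rule-based model against a relational property over a numerical domain and quantifying any deviation found, the Python sketch below encodes a toy example with the Z3 SMT solver. The rule, the variable names, and the monotonicity property are hypothetical stand-ins, not the models or properties used in the paper.

    # Hypothetical sketch: validate a toy deterministic rule-based model
    # against a relational property using the Z3 SMT solver
    # (pip install z3-solver). Rule and property are illustrative only.
    from z3 import Real, If, And, Solver, sat

    def risk(density, airflow):
        # Toy rule: predict "high risk" (1) iff crowd density lies in
        # (3, 5] and airflow is below 0.5; predict "low risk" (0) otherwise.
        return If(And(density > 3.0, density <= 5.0, airflow < 0.5), 1, 0)

    d1, d2, a = Real("d1"), Real("d2"), Real("a")
    s = Solver()
    # Negate the relational property "predicted risk never decreases as
    # density grows": search for d1 <= d2 whose predicted risk decreases.
    s.add(d1 <= d2, risk(d1, a) > risk(d2, a))

    if s.check() == sat:
        # A satisfying assignment quantifies a deviation (counterexample).
        print("Property violated, e.g.:", s.model())
    else:
        print("Property holds for all inputs.")

Because the toy rule is not monotone in density, the solver reports a concrete counterexample; for a model that satisfies the property, the query is unsatisfiable and the validation succeeds.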
Funding: This work was supported by the project OPENNESS (N. A0375-2020-36616), funded by POR FESR LAZIO 2014-2020 – GRUPPI DI RICERCA 2020; by Future Artificial Intelligence Research (FAIR) (N. PE0000013), funded by the Italian Recovery and Resilience Plan; and by the Gruppo Nazionale per il Calcolo Scientifico (INdAM).
DOI: 10.1002/smr.70054