Unlocking the black box: Enhancing human-AI collaboration in high-stakes healthcare scenarios through explainable AI


Detailed description

Saved in:
Bibliographic details
Published in: Technological Forecasting & Social Change, Vol. 219, p. 124265
Main authors: Hassan, Reda; Nguyen, Nhien; Finserås, Stine Rasdal; Adde, Lars; Strümke, Inga; Støen, Ragnhild
Format: Journal Article
Language: English
Published: Elsevier Inc, 01.10.2025
Keywords:
ISSN: 0040-1625
Online access: Full text
Description
Summary: Despite the advanced predictive capabilities of artificial intelligence (AI) systems, their inherent opacity often leaves users confused about the rationale behind their outputs. We investigate the challenge of AI opacity, which undermines user trust and the effectiveness of clinical judgment in healthcare. We demonstrate how human experts form judgments in high-stakes scenarios where their assessments diverge from AI predictions, emphasizing the need for explainability to enhance clinical judgment and trust in AI systems. We used a scenario-based methodology, conducting 28 semi-structured interviews and observations with clinicians from Norway and Egypt. Our analysis revealed that, during the process of forming judgments, human experts engage in AI interrogation practices when faced with opaque AI systems. Obtaining explainability from AI systems leads to increased interrogation practices aimed at gaining a deeper understanding of AI predictions. With the introduction of explainable AI (XAI), experts demonstrate greater trust in the AI system, show a readiness to learn from AI, and may reconsider or update their initial judgments when these contradict AI predictions.

Highlights:
• Experts reconsider/update initial judgments when provided with AI explainability
• XAI aids in understanding AI predictions, enhancing clinical judgments
• Scenario-based methodology, a novel approach from psychology and HCI
• Overreliance on AI for evaluations may limit organizational learning capacity
• Guidelines for designing human-AI interactions to support expert judgment
DOI:10.1016/j.techfore.2025.124265