Unlocking the black box: Enhancing human-AI collaboration in high-stakes healthcare scenarios through explainable AI


Detailed bibliography
Published in: Technological forecasting & social change, Volume 219, p. 124265
Main authors: Hassan, Reda; Nguyen, Nhien; Finserås, Stine Rasdal; Adde, Lars; Strümke, Inga; Støen, Ragnhild
Medium: Journal Article
Language: English
Published: Elsevier Inc, 01.10.2025
ISSN: 0040-1625
Description
Summary: Despite the advanced predictive capabilities of artificial intelligence (AI) systems, their inherent opacity often leaves users confused about the rationale behind their outputs. We investigate the challenge of AI opacity, which undermines user trust and the effectiveness of clinical judgment in healthcare. We demonstrate how human experts form judgments in high-stakes scenarios where their assessments diverge from AI predictions, emphasizing the need for explainability to enhance clinical judgment and trust in AI systems. We used a scenario-based methodology, conducting 28 semi-structured interviews and observations with clinicians from Norway and Egypt. Our analysis revealed that, while forming judgments, human experts engage in AI interrogation practices when faced with opaque AI systems. Obtaining explainability from AI systems leads to increased interrogation practices aimed at gaining a deeper understanding of AI predictions. With the introduction of explainable AI (XAI), experts demonstrate greater trust in the AI system, show a readiness to learn from AI, and may reconsider or update their initial judgments when they contradict AI predictions.

Highlights:
• Experts reconsider/update initial judgments when provided with AI explainability
• XAI aids in understanding AI predictions, enhancing clinical judgments
• Scenario-based methodology, a novel approach from psychology and HCI
• Overreliance on AI for evaluations may limit organizational learning capacity
• Guidelines for designing human-AI interactions to support expert judgment
DOI: 10.1016/j.techfore.2025.124265