(Mis)Communicating with our AI Systems

Bibliographic Details
Title: (Mis)Communicating with our AI Systems
Authors: Laura Cros Vila, Bob Sturm
Source: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pp. 1-9
Publisher Information: ACM, 2025.
Publication Year: 2025
Subject Terms: Computer Sciences, Communication, Explanation, Explainable AI, Dialog, Mutual-Understanding, Conversation, Miscommunication, Explainability
Description: Explainable Artificial Intelligence (XAI) is a discipline concerned with understanding predictions of AI systems. What is ultimately desired from XAI methods is for an AI system to link its input and output in a way that is interpretable with reference to the environment in which it is applied. A variety of methods have been proposed, but we argue in this paper that what has yet to be considered is miscommunication: the failure to convey and/or interpret an explanation accurately. XAI can be seen as a communication process, and thus looking at how humans explain things to each other can provide guidance for its application and evaluation. We motivate a specific model of communication to help identify essential components of the process, and show the critical importance of establishing common ground, i.e., the shared mutual knowledge, beliefs, and assumptions of the participants communicating.
ISBN: 9798400713941
Document Type: Article (Conference object)
File Description: application/pdf
DOI: 10.1145/3706598.3713771
Access URL: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-363375
Rights: CC BY
Accession Number: edsair.doi.dedup.....027eee4a0785d25d14e528f6e70a8f3e
Database: OpenAIRE