Exploring users' mental models of conversational agents: a systematic review.

Detailed bibliography
Title: Exploring users' mental models of conversational agents: a systematic review.
Authors: Cena, Federica; Grasso, Francesca (fr.grasso@unito.it)
Source: Behaviour & Information Technology, Oct 2025, pp. 1-24. 24 pp., 1 illustration.
Subjects: Human-computer interaction; Conceptual models; User experience; Chatbots; Communication styles
Abstract: In recent years, conversational agents (CAs), such as chatbots and voice-based digital assistants, have become increasingly prevalent in everyday life. However, interactions with these agents are often unsatisfactory due to a gap between user expectations and actual experiences, leading to frustration. These discrepancies are strongly influenced by users' mental models – cognitive frameworks helping users understand and predict system behaviour. Despite their importance, mental models remain underexplored in CA research, and no systematic review has yet synthesised findings in this area. We conducted a systematic review of 48 studies published between 2000 and 2023, identified through searches in IEEE Xplore, Scopus, and Web of Science, complemented by backward snowballing. We included peer-reviewed studies that investigate mental models in the context of CAs and excluded works focussing on broader constructs such as UX or perception. Using an HCI lens, we analysed how users' mental models are conceptualised, shaped by user and CA characteristics, and assessed across diverse dialogue systems. Our findings show that users' models are influenced by agents' features such as communication style, embodiment, and role. We highlight open challenges, including methodological inconsistencies across studies and the lack of standardised approaches to evaluating users' mental models. Our findings provide insights for designing more human-centred conversational systems and a foundation for future research. [ABSTRACT FROM AUTHOR]
Database: Business Source Index
Description
ISSN: 0144-929X
DOI: 10.1080/0144929x.2025.2573436