The Effect of Voice and Repair Strategy on Trust Formation and Repair in Human-Robot Interaction

Bibliographic Details
Title: The Effect of Voice and Repair Strategy on Trust Formation and Repair in Human-Robot Interaction
Authors: Romeo, Marta; Torre, Ilaria; Le Maguer, Sébastien; Sleat, Alexander; Cangelosi, Angelo; Leite, Iolanda
Source: ACM Transactions on Human-Robot Interaction. 14(2)
Subject Terms: Auditory feedback, CCS Concepts, Human-centered computing -> Human computer interaction (HCI)
Description: Trust is essential for social interactions, including those between humans and social artificial agents such as robots. Several factors, and combinations thereof, can contribute to the formation of trust and, importantly for machines that operate with a certain margin of error, to its maintenance and repair after it has been breached. In this article, we present the results of a study investigating the role of robot voice and choice of repair strategy on trust formation and repair in a collaborative task. People helped a robot navigate through a maze, and the robot made mistakes at pre-defined points during the navigation. Via in-game behaviour and follow-up questionnaires, we measured people's trust in the robot. We found that people trusted the robot more in the game when it spoke with a state-of-the-art synthetic voice than with the default robot voice, even though they indicated the opposite in the questionnaires. Additionally, we found that three repair strategies that people use in human-human interaction (justifying the mistake, promising to do better, and denying the mistake) also work in human-robot interaction.
File Description: electronic
Access URL: https://research.chalmers.se/publication/546010
https://research.chalmers.se/publication/546010/file/546010_Fulltext.pdf
Database: SwePub
ISSN: 2573-9522
DOI: 10.1145/3711938