Reactive or Proactive? How Robots Should Explain Failures

Bibliographic Details
Published in: 2024 19th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 413-422
Main Authors: LeMasurier, Gregory; Gautam, Alvika; Han, Zhao; Crandall, Jacob W.; Yanco, Holly A.
Format: Conference Proceeding
Language: English
Published: ACM, 11.03.2024
Description
Summary: As robots tackle increasingly complex tasks, the need for explanations becomes essential for gaining trust and acceptance. Explainable robotic systems should not only elucidate failures when they occur but also predict and preemptively explain potential issues. This paper compares explanations from Reactive Systems, which detect and explain failures after they occur, with those from Proactive Systems, which predict and explain issues in advance. Our study reveals that the Proactive System fosters higher perceived intelligence and trust, and that its explanations were rated as more understandable and timely. Our findings aim to advance the design of effective robot explanation systems, allowing people to diagnose and provide assistance for problems that may prevent a robot from finishing its task.
CCS Concepts: • Human-centered computing → Empirical studies in interaction design; • Computer systems organization → Robotics.
DOI: 10.1145/3610977.3634963