Planning with Trust for Human-Robot Collaboration

Detailed Bibliography
Published in: 2018 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 307-315
Main Authors: Chen, Min; Nikolaidis, Stefanos; Soh, Harold; Hsu, David; Srinivasa, Siddhartha
Format: Conference paper
Language: English
Published: New York, NY, USA: ACM, February 26, 2018
Series: ACM Conferences
ISBN: 9781450349536, 1450349536
ISSN: 2167-2148
Description
Abstract: Trust is essential for human-robot collaboration and user adoption of autonomous systems, such as robot assistants. This paper introduces a computational model that integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human behaviors, and (iii) choose actions that maximize team performance over the long term. We validated the model through human subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). The results show that the trust-POMDP improves human-robot team performance in this task. They further suggest that maximizing trust in itself may not improve team performance.
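
The abstract's core mechanism, maintaining a belief over a human teammate's latent trust and updating it from observed interactions, can be illustrated with a short sketch. Everything below is a hypothetical toy model: the discretization of trust into levels, the trust dynamics, and the intervention-based observation probabilities are assumptions made for illustration, whereas the paper learns these quantities from data.

import numpy as np

# Minimal sketch of a POMDP belief update over a latent trust variable.
# All names, dynamics, and probabilities are illustrative assumptions,
# not the learned model from the paper.

N_TRUST_LEVELS = 7  # hypothetical discretization of trust

def transition_matrix(robot_action_succeeded):
    """Hypothetical trust dynamics: success nudges trust up, failure down."""
    T = np.zeros((N_TRUST_LEVELS, N_TRUST_LEVELS))
    for t in range(N_TRUST_LEVELS):
        nxt = (min(t + 1, N_TRUST_LEVELS - 1) if robot_action_succeeded
               else max(t - 1, 0))
        T[t, t] += 0.6   # probability trust stays put
        T[t, nxt] += 0.4  # probability trust shifts one level
    return T

def observation_prob(human_intervened, trust_level):
    """Hypothetical observation model: low-trust humans intervene more often."""
    p_intervene = 0.9 - 0.8 * trust_level / (N_TRUST_LEVELS - 1)
    return p_intervene if human_intervened else 1.0 - p_intervene

def update_belief(belief, robot_action_succeeded, human_intervened):
    """Standard POMDP belief update: predict with dynamics, correct with the observation likelihood, then renormalize."""
    predicted = belief @ transition_matrix(robot_action_succeeded)
    likelihood = np.array([observation_prob(human_intervened, t)
                           for t in range(N_TRUST_LEVELS)])
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Example: start from a uniform belief; the robot succeeds at a task and the
# human does not intervene, so belief mass shifts toward higher trust levels.
belief = np.full(N_TRUST_LEVELS, 1.0 / N_TRUST_LEVELS)
belief = update_belief(belief, robot_action_succeeded=True, human_intervened=False)
print(belief.round(3))

A trust-aware planner would layer action selection on top of such a belief, choosing actions that maximize expected long-term team performance rather than trust itself, consistent with the paper's finding that maximizing trust alone may not improve team performance.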
DOI: 10.1145/3171221.3171264