Sacrifice One For the Good of Many? People Apply Different Moral Norms to Human and Robot Agents

Bibliographic Details
Published in: HRI '15: ACM/IEEE International Conference on Human-Robot Interaction, pp. 117-124
Main authors: Malle, Bertram F.; Scheutz, Matthias; Arnold, Thomas; Voiklis, John; Cusimano, Corey
Format: Conference paper
Language: English
Published: ACM, March 1, 2015
Subjects:
Online access: Full text
Description
Abstract: Moral norms play an essential role in regulating human interaction. With the growing sophistication and proliferation of robots, it is important to understand how ordinary people apply moral norms to robot agents and make moral judgments about their behavior. We report the first comparison of people's moral judgments (of permissibility, wrongness, and blame) about human and robot agents. Two online experiments (total N = 316) found that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a "utilitarian" choice), and they were blamed more than their human counterparts when they did not make that choice. Though the utilitarian sacrifice was generally seen as permissible for human agents, they were blamed more for choosing this option than for doing nothing. These results provide a first step toward a new field of Moral HRI, which is well placed to help guide the design of social robots.
DOI: 10.1145/2696454.2696458