People are averse to machines making moral decisions


Detailed Bibliography
Published in: Cognition, Vol. 181, pp. 21–34
Main Authors: Bigman, Yochanan E.; Gray, Kurt
Format: Journal Article
Language: English
Published: Netherlands: Elsevier B.V., 01.12.2018
ISSN: 0010-0277, 1873-7838
Description
Summary: Do people want autonomous machines making moral decisions? Nine studies suggest that the answer is ‘no’—in part because machines lack a complete mind. Studies 1–6 find that people are averse to machines making morally relevant driving, legal, medical, and military decisions, and that this aversion is mediated by the perception that machines can neither fully think nor feel. Studies 5–6 find that this aversion exists even when moral decisions have positive outcomes. Studies 7–9 briefly investigate three potential routes to increasing the acceptability of machine moral decision-making: limiting the machine to an advisory role (Study 7), increasing machines’ perceived experience (Study 8), and increasing machines’ perceived expertise (Study 9). Although some of these routes show promise, the aversion to machine moral decision-making is difficult to eliminate. This aversion may prove challenging for the integration of autonomous technology in moral domains including medicine, the law, the military, and self-driving vehicles.
DOI: 10.1016/j.cognition.2018.08.003