Racism, responsibility and autonomy in HCI: Testing perceptions of an AI agent

Bibliographic Details
Published in: Computers in Human Behavior, Vol. 100, pp. 79-84
Main Authors: Hong, Joo-Wha; Williams, Dmitri
Format: Journal Article
Language: English
Published: Elmsford: Elsevier Ltd / Elsevier Science Ltd, 01.11.2019
ISSN: 0747-5632, 1873-7692
Description
Summary: This study employs an experiment to test subjects' perceptions of an artificial intelligence (AI) crime-predicting agent that produces clearly racist predictions. It used a 2 (human crime predictor/AI crime predictor) × 2 (high/low seriousness of crime) design to test the relationship between the predictor's level of autonomy and its responsibility for the unjust results. The seriousness of the crime was manipulated to examine the relationship between perceived threat and trust in the authority's decisions. Participants (N = 334) responded to an online questionnaire after reading one of four scenarios based on the same story, in which a crime predictor unjustly reports a higher likelihood of subsequent crimes for a black defendant than for a white defendant who committed similar crimes. The results indicate that people think an AI crime predictor has significantly less autonomy than a human crime predictor. However, neither the identity of the crime predictor nor the seriousness of the crime had a significant effect on the level of responsibility assigned to the predictor. In addition, a clear positive relationship between autonomy and responsibility was found in both the human and AI crime predictor scenarios. The implications of the findings for applications and theory are discussed.
Highlights:
• People think that an AI crime predictor has significantly less autonomy than a human crime predictor.
• Neither the type of crime predictor nor the seriousness of the crime affected the responsibility assigned to the predictor.
• A clear positive relationship between autonomy and responsibility was found in both the human and AI crime predictor scenarios.
DOI: 10.1016/j.chb.2019.06.012