Locating fault for AI harms: a systems theory of foreseeability, reasonable care and causal responsibility in the AI value chain

Bibliographic Details
Published in: Law, Innovation and Technology, Vol. 17, No. 1, pp. 103-138
Main Authors: Fraser, Henry L.; Suzor, Nicolas P.
Format: Journal Article
Language: English
Published: Abingdon: Routledge (Taylor & Francis Ltd), 02.01.2025
ISSN: 1757-9961, 1757-997X
Description
Summary: This paper presents an original perspective on fault and responsibility for harms caused by artificial intelligence (AI) systems. Scholarship on liability for AI harms highlights the difficulties that doctrines like negligence may encounter in attributing responsibility across complex AI value chains. Drawing on the theory of 'system safety', this paper argues that these difficulties can be diminished by conceptualising AI hazards as a set of socio-technical conditions rather than as specific aberrant outputs ('errors') with discrete technical causes. System affordances, use context, and organisational arrangements are all key risk factors. Animated by case studies of AI harms and near misses, the paper clarifies what is 'reasonably foreseeable' about AI harms, and to which value chain participants. It also identifies various kinds of 'reasonable care' that different actors can exercise to avert harm. This socio-technical perspective makes it easier to apply concepts that are vital to negligence liability and beyond, and highlights key priorities for regulating 'responsible AI'.
DOI: 10.1080/17579961.2025.2469345