Algorithms, Correcting Biases

Bibliographic Details
Published in: Social Research, Vol. 86, No. 2, pp. 499–511
Main Author: Sunstein, Cass R.
Format: Journal Article
Language: English
Published: New York: Johns Hopkins University Press, 01.07.2019
ISSN: 0037-783X, 1944-768X
Description
Summary: A great deal of theoretical work explores the possibility that algorithms may be biased in one or another respect. But for purposes of law and policy, some of the most important empirical research finds exactly the opposite. In the context of bail decisions, an algorithm designed to predict flight risk does much better than human judges, in large part because the latter place an excessive emphasis on the current offense. Current Offense Bias, as we might call it, is best seen as a cousin of "availability bias," a well-known source of mistaken probability judgments. The broader lesson is that well-designed algorithms should be able to avoid cognitive biases of many kinds. Existing research on bail decisions also casts a new light on how to think about the risk that algorithms will discriminate on the basis of race (or other factors). Algorithms can easily be designed so as to avoid taking account of race (or other factors). They can also be constrained so as to produce whatever kind of racial balance is sought, and thus to reveal tradeoffs among various social values.
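The last two sentences of the summary describe two concrete design levers. As a loose illustration only, not the study's actual model, the following minimal Python sketch on fully synthetic data shows both: the (hypothetical) race variable is simply left out of the feature matrix, and per-group decision thresholds are then chosen to equalize detention rates, one possible balance criterion among many, which makes the tradeoff the abstract mentions directly measurable. All variable names and data here are invented for illustration.

# Minimal sketch of the two design levers from the abstract (synthetic data):
# (1) exclude the protected attribute from the predictor's inputs;
# (2) constrain decisions to a chosen group balance and inspect the tradeoff.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
race = rng.integers(0, 2, n)             # hypothetical binary group label
prior_arrests = rng.poisson(2, n)        # invented predictor
current_offense = rng.integers(0, 3, n)  # invented severity of current charge
# Synthetic "true" flight risk, deliberately independent of race.
p_flee = 1.0 / (1.0 + np.exp(-(0.4 * prior_arrests - 1.5)))
fled = rng.random(n) < p_flee

# (1) Race is simply excluded from the feature matrix.
X = np.column_stack([prior_arrests, current_offense])
risk = LogisticRegression().fit(X, fled).predict_proba(X)[:, 1]

# (2) Pick per-group thresholds so both groups are detained at the same rate.
# Equal detention rates is just one balance criterion; others trade off
# accuracy and balance differently.
target_rate = 0.3
thresholds = {g: np.quantile(risk[race == g], 1 - target_rate) for g in (0, 1)}
detain = risk >= np.where(race == 0, thresholds[0], thresholds[1])

for g in (0, 1):
    grp = race == g
    print(f"group {g}: detention rate {detain[grp].mean():.2f}, "
          f"flight rate among released {fled[grp & ~detain].mean():.2f}")

Varying target_rate (or swapping in a different balance criterion) and watching the flight rate among released defendants move is exactly the kind of explicit tradeoff among social values that the abstract says a constrained algorithm can reveal.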
DOI: 10.1353/sor.2019.0024