A fairness scale for real-time recidivism forecasts using a national database of convicted offenders
Saved in:
| Title: | A fairness scale for real-time recidivism forecasts using a national database of convicted offenders |
|---|---|
| Authors: | Jacob Verrey, Peter Neyroud, Lawrence Sherman, Barak Ariel |
| Contributors: | Apollo - University of Cambridge Repository |
| Source: | Neural Computing and Applications. 37:21607-21657 |
| Publisher Information: | Springer Science and Business Media LLC, 2025. |
| Publication Year: | 2025 |
| Subjects: | Criminal justice, Fairness, 46 Information and Computing Sciences, 4602 Artificial Intelligence, Networking and Information Technology R&D (NITRD), Recidivism, 4611 Machine Learning, Machine learning, Machine Learning and Artificial Intelligence, Mental health, 4603 Computer Vision and Multimedia Computation, 16 Peace, Justice and Strong Institutions, Forecasting |
| Description: | This investigation explores whether machine learning can predict recidivism while addressing societal biases. To investigate this, we obtained conviction data from the UK’s Police National Computer (PNC) covering 346,685 records between January 1, 2000, and February 3, 2006 (His Majesty’s Inspectorate of Constabulary in Use of the Police National Computer: An inspection of the ACRO Criminal Records Office. His Majesty’s Inspectorate of Constabulary, Birmingham, https://assets-hmicfrs.justiceinspectorates.gov.uk/uploads/police-national-computer-use-acro-criminal-records-office.pdf, 2017). We generate twelve machine learning models (six forecasting general recidivism and six forecasting violent recidivism) over a 3-year follow-up period, evaluated via fivefold cross-validation. Our best-performing models outperform the existing state of the art, receiving area under the curve (AUC) scores of 0.8660 and 0.8375 for general and violent recidivism, respectively. Next, we construct a fairness scale that communicates the semantic and technical trade-offs associated with debiasing a criminal justice forecasting model. We use this scale to debias our best-performing models. Results indicate that both models can achieve all five fairness definitions: the metrics measuring these definitions (the statistical range of recall, precision, positive rate, and error balance across demographic groups) all fall within one percentage point of each other. Deployment recommendations and implications are discussed, including recommended safeguards against false positives, an explication of how these models address societal biases, and a case study illustrating how these models can improve existing criminal justice practices. That is, these models may help police identify fewer people, in a way less affected by structural bias, while still reducing crime. A randomized controlled trial is proposed to test the illustrated case study, and further research directions are explored. A minimal sketch of the fairness-range check described here appears after this record. |
| Document Type: | Article |
| File Description: | application/pdf; application/zip; text/xml |
| Language: | English |
| ISSN: | 1433-3058 0941-0643 |
| DOI: | 10.1007/s00521-025-11478-x |
| Access URL: | https://www.repository.cam.ac.uk/handle/1810/388929 https://doi.org/10.1007/s00521-025-11478-x |
| Rights: | CC BY |
| Accession Number: | edsair.doi.dedup.....c90c8b91b31ac0d2e434a1f34688f500 |
| Database: | OpenAIRE |
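The description assesses fairness by checking that the statistical range of recall, precision, and positive rate across demographic groups stays within one percentage point. The paper's actual implementation is not reproduced in this record; the following is a minimal sketch under that assumption, with all function names, group labels, and data purely illustrative.

```python
# Illustrative sketch of a fairness-range check: per-group recall, precision,
# and positive rate, with the range (max - min) across groups compared to a
# one-percentage-point threshold. Names and data are hypothetical examples.
from collections import defaultdict


def group_metrics(y_true, y_pred, groups):
    """Return {group: (recall, precision, positive_rate)} for binary labels."""
    buckets = defaultdict(list)
    for t, p, g in zip(y_true, y_pred, groups):
        buckets[g].append((t, p))
    out = {}
    for g, pairs in buckets.items():
        tp = sum(1 for t, p in pairs if t == 1 and p == 1)
        fp = sum(1 for t, p in pairs if t == 0 and p == 1)
        fn = sum(1 for t, p in pairs if t == 1 and p == 0)
        n = len(pairs)
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        positive_rate = (tp + fp) / n if n else 0.0
        out[g] = (recall, precision, positive_rate)
    return out


def fairness_ranges(metrics_by_group):
    """Range (max - min) of each metric across demographic groups."""
    names = ("recall", "precision", "positive_rate")
    ranges = {}
    for i, name in enumerate(names):
        values = [m[i] for m in metrics_by_group.values()]
        ranges[name] = max(values) - min(values)
    return ranges


if __name__ == "__main__":
    # Toy example with two hypothetical demographic groups, "A" and "B".
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    ranges = fairness_ranges(group_metrics(y_true, y_pred, groups))
    for name, r in ranges.items():
        verdict = "within" if r <= 0.01 else "exceeds"
        print(f"{name} range across groups: {r:.4f} ({verdict} one percentage point)")
```

In this sketch a model would be considered to satisfy a fairness definition when the corresponding range falls at or below 0.01; the error-balance check mentioned in the description could be added analogously by computing per-group false positive and false negative rates.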