Publication Type

Journal Article

Version

publishedVersion

Publication Date

12-2022

Abstract

Big data and algorithmic risk prediction tools promise to improve criminal justice systems by reducing human biases and inconsistencies in decision-making. Yet different, equally justifiable choices when developing, testing and deploying these socio-technical tools can lead to disparate predicted risk scores for the same individual. Synthesising diverse perspectives from machine learning, statistics, sociology, criminology, law, philosophy and economics, we conceptualise this phenomenon as predictive inconsistency. We describe sources of predictive inconsistency at different stages of algorithmic risk assessment tool development and deployment and consider how future technological developments may amplify predictive inconsistency. We argue, however, that in a diverse and pluralistic society we should not expect to completely eliminate predictive inconsistency. Instead, to bolster the legal, political and scientific legitimacy of algorithmic risk prediction tools, we propose identifying and documenting relevant and reasonable ‘forking paths’ to enable quantifiable, reproducible multiverse and specification curve analyses of predictive inconsistency at the individual level.
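The multiverse and specification curve analysis the abstract proposes can be illustrated with a small sketch. The Python example below is hypothetical and not drawn from the paper: it enumerates a toy grid of 'forking paths' (model family, feature subset, and a variable transform), fits every resulting specification on synthetic data, and reports the spread of predicted risk scores for one fixed individual. All variable names, features, and modelling choices are illustrative assumptions.

```python
# A minimal, illustrative sketch (not the authors' implementation) of a
# multiverse / specification curve analysis of predictive inconsistency:
# every combination of defensible development choices ("forking paths")
# is fitted, and the spread of predicted risk scores for one fixed
# individual is examined. All names and choices here are hypothetical.
from itertools import product

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data standing in for a risk-assessment dataset:
# columns = [age, prior_count, employment, charge_severity].
X = rng.normal(size=(500, 4))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

# The single individual whose risk score we trace across the multiverse.
individual = np.array([[0.2, 1.0, -0.5, 0.8]])

# Forking paths: model family x feature subset x prior-count transform.
models = {
    "logit": lambda: LogisticRegression(max_iter=1000),
    "forest": lambda: RandomForestClassifier(n_estimators=200, random_state=0),
}
feature_sets = {"all": [0, 1, 2, 3], "no_employment": [0, 1, 3]}
transforms = {"raw": lambda v: v, "log1p": lambda v: np.log1p(np.abs(v))}

scores = {}
for (m_name, make_model), (f_name, cols), (t_name, tf) in product(
    models.items(), feature_sets.items(), transforms.items()
):
    Xt, xt = X.copy(), individual.copy()
    Xt[:, 1], xt[:, 1] = tf(Xt[:, 1]), tf(xt[:, 1])  # transform prior_count
    model = make_model().fit(Xt[:, cols], y)
    scores[(m_name, f_name, t_name)] = model.predict_proba(xt[:, cols])[0, 1]

# Specification curve: sort the specifications by predicted risk and
# report the individual-level spread (the predictive inconsistency).
for spec, p in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{spec}: predicted risk = {p:.3f}")
print(f"range across multiverse: {max(scores.values()) - min(scores.values()):.3f}")
```

Even this tiny grid of equally justifiable choices typically yields visibly different risk scores for the same individual, which is the individual-level inconsistency the proposed analyses are meant to quantify and document.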

Keywords

algorithmic risk prediction, criminal justice, forking paths, multiverse analysis, pluralism, predictive inconsistency, specification curve analysis

Discipline

Criminal Law | Theory and Algorithms

Research Areas

Asian and Comparative Legal Systems

Publication

Journal of the Royal Statistical Society: Series A (Statistics in Society)

Volume

185

Issue

S2

First Page

S692

Last Page

S723

ISSN

0964-1998

Identifier

10.1111/rssa.12966

Publisher

Royal Statistical Society

Additional URL

https://doi.org/10.1111/rssa.12966
