Publication Type

Journal Article

Version

Accepted Version

Publication Date

6-2019

Abstract

Developments in data analytics, computational power and machine learning techniques have driven all branches of government to outsource authority to machines in performing public functions: social welfare, law enforcement and, most importantly, the courts. Complex statistical algorithms and artificial intelligence (AI) tools are being used to automate decision-making, with significant impacts on individuals’ rights and obligations. Controversies have emerged regarding the opaque nature of such schemes, the unintentional bias against and harm to under-represented populations, and the broader legal, social and ethical ramifications. State v Loomis, a recent case in the USA, demonstrates how unrestrained and unchecked outsourcing of public power to machines may undermine human rights and the rule of law. Through a close examination of the case, this article unpacks the issues of the ‘legal black box’ and the ‘technical black box’ to identify the risks that the rampant ‘algorithmization’ of government functions poses to due process, equal protection and transparency. We further assess several important governance proposals and suggest ways to improve the accountability of AI-facilitated decisions. As AI systems are commonly employed in consequential settings across jurisdictions, technologically informed governance models are needed to locate optimal institutional designs that strike a balance between the benefits and costs of algorithmization.

Discipline

Dispute Resolution and Arbitration | Science and Technology Law

Research Areas

Asian and Comparative Legal Systems

Publication

International Journal of Law and Information Technology

Volume

27

Issue

2

First Page

122

Last Page

141

ISSN

0967-0769

DOI

10.1093/ijlit/eaz001

Publisher

Oxford University Press

Additional URL

https://doi.org/10.1093/ijlit/eaz001