Publication Type

Journal Article

Version

publishedVersion

Publication Date

February 2023

Abstract

It is conventionally argued that because an artificially intelligent (AI) system acts autonomously, its makers cannot easily be held liable should the system's actions cause harm. Since the system cannot be held liable on its own account either, existing laws expose victims to accountability gaps and need to be reformed. Recent legal instruments have nonetheless established obligations on AI developers and providers. Drawing on attribution theory, this paper examines how these seemingly opposing positions are shaped by the ways in which AI systems are conceptualised. Specifically, folk dispositionism underpins conventional legal discourse on AI liability, personality, publications, and inventions, and leads us towards problematic legal outcomes. Examining the technology and terminology driving contemporary AI systems, the paper contends that AI systems are better conceptualised as situational characters whose actions remain constrained by their programming. Properly viewing AI systems as such illuminates how existing legal doctrines could be sensibly applied to AI and reinforces emerging calls for placing greater scrutiny on the broader AI ecosystem.

Keywords

artificial intelligence, autonomous systems, attribution theory, law and technology, law and psychology

Discipline

Artificial Intelligence and Robotics | Public Law and Legal Theory | Science and Technology Law

Research Areas

Innovation, Technology and the Law

Publication

Legal Studies

Volume

43

Issue

4

First Page

583

Last Page

602

ISSN

0261-3875

Identifier

10.1017/lst.2022.52

Publisher

Cambridge University Press

Copyright Owner and License

Authors

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Additional URL

https://doi.org/10.1017/lst.2022.52
