Publication Type
Journal Article
Version
acceptedVersion
Publication Date
11-2024
Abstract
Despite the importance of discussions over the epistemic role that artificially intelligent decision support systems ought to play, such discussions are currently lacking in both the AI literature and the epistemology literature. My goal in this paper is to rectify this by proposing an account of the epistemic role of AI decision support systems in medicine and discussing what this epistemic role means with regard to how these systems ought to be utilized. In particular, I argue that AI decision support systems are not epistemic superiors, inferiors, or peers. Instead, I recommend that they be classified in an epistemic category of their own, as “epistemic nudges.” With my proposed account of the epistemic role of AI decision support systems, I aim to provide answers to the following two questions: (1) How ought disagreements between a clinician and an AI decision support system be handled? (2) How ought responsibility and accountability be allocated when an AI decision support system used by a human clinician in decision-making results in harm?
Keywords
Epistemic role, Artificial intelligence, Decision support systems, Disagreement, Healthcare
Discipline
Artificial Intelligence and Robotics | Epistemology
Research Areas
Humanities
Areas of Excellence
Digital transformation
Publication
Philosophy and Technology
Volume
37
First Page
1
Last Page
20
ISSN
2210-5433
Identifier
10.1007/s13347-024-00819-8
Publisher
Springer
Citation
HIRMIZ, Rand. (2024). The epistemic role of AI decision support systems: Neither superiors, nor inferiors, nor peers. Philosophy and Technology, 37, 1-20.
Available at: https://ink.library.smu.edu.sg/soss_research/4266
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1007/s13347-024-00819-8