Publication Type
Journal Article
Version
publishedVersion
Publication Date
10-2023
Abstract
Artificial intelligence (AI) is increasingly being considered to assist human decision-making in high-stakes domains (e.g., health). However, researchers have raised the issue that humans can over-rely on wrong suggestions from an AI model instead of achieving human-AI complementary performance. In this work, we utilized salient feature explanations along with 'what-if', counterfactual explanations to encourage humans to review AI suggestions more analytically and reduce over-reliance on AI, and we explored the effect of these explanations on trust and reliance on AI during clinical decision-making. We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion, and analyzed their performance, agreement level on the task, and reliance on AI without and with the two types of AI explanations. Our results showed that an AI model with both salient feature and counterfactual explanations helped therapists and laypersons improve their performance and agreement level on the task when 'right' AI outputs were presented. While both therapists and laypersons over-relied on 'wrong' AI outputs, counterfactual explanations helped both groups reduce their over-reliance on 'wrong' AI outputs by 21% compared to salient feature explanations. Specifically, laypersons showed larger performance degradation than therapists: 18.0 F1-score with salient feature explanations and 14.0 F1-score with counterfactual explanations, versus 8.6 and 2.8 F1-scores for therapists, respectively. Our work discusses the potential of counterfactual explanations to help users better estimate the accuracy of an AI model and reduce over-reliance on 'wrong' AI outputs, as well as implications for improving human-AI collaborative decision-making.
Keywords
clinical decision support systems, explainable AI, human centered AI, human-AI collaboration, physical stroke rehabilitation assessment, reliance, trust
Discipline
Artificial Intelligence and Robotics | Health Information Technology
Research Areas
Intelligent Systems and Optimization
Publication
Proceedings of the ACM on Human-Computer Interaction
Volume
7
First Page
1
Last Page
22
ISSN
2573-0142
Identifier
10.1145/3610218
Publisher
Association for Computing Machinery (ACM)
Citation
LEE, Min Hun and CHEW, Chong Jun.
Understanding the effect of counterfactual explanations on trust and reliance on AI for human-AI collaborative clinical decision making. (2023). Proceedings of the ACM on Human-Computer Interaction. 7, 1-22.
Available at: https://ink.library.smu.edu.sg/sis_research/8274
Copyright Owner and License
Authors-CC-BY
Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.
Additional URL
https://doi.org/10.1145/3610218