Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

6-2025

Abstract

Despite the growing promise of artificial intelligence (AI) in supporting decision-making across domains, fostering appropriate human reliance on AI remains a critical challenge. In this paper, we investigate the utility of exploring distance-based uncertainty scores for task delegation to AI and describe how these scores can be visualized through embedding representations for human-AI decision-making. After developing an AI-based system for physical stroke rehabilitation assessment, we conducted a study with 19 health professionals and 10 students in medicine/health to understand the effect of exploring distance-based uncertainty scores on users’ reliance on AI. Our findings showed that distance-based uncertainty scores outperformed traditional probability-based uncertainty scores in identifying uncertain cases. In addition, after exploring confidence scores for task delegation and reviewing embedding-based visualizations of distance-based uncertainty scores, participants achieved an 8.20% higher rate of correct decisions, a 7.15% higher rate of changing their decisions to correct ones, and a 7.14% lower rate of incorrect changes after reviewing AI outputs than those reviewing probability-based uncertainty scores (p
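The record does not include the paper's implementation, so the following is only a minimal sketch of one plausible reading of "distance-based uncertainty scores" over embedding representations: score a case by its mean distance to the k nearest training embeddings, and contrast that with the conventional probability-based score (one minus the maximum class probability). All names, the choice of k, and the synthetic data below are hypothetical illustrations, not the authors' method.

```python
import numpy as np

def distance_uncertainty(z, train_z, k=5):
    """Hypothetical distance-based uncertainty score.

    Mean Euclidean distance from the query embedding z to its k
    nearest training embeddings; queries far from the training
    manifold receive high uncertainty.
    """
    dists = np.linalg.norm(train_z - z, axis=1)  # distance to every training case
    return np.sort(dists)[:k].mean()

def probability_uncertainty(probs):
    """Conventional probability-based baseline: 1 - max softmax probability."""
    return 1.0 - np.max(probs)

# Synthetic demo: an out-of-distribution query scores higher than an in-distribution one.
rng = np.random.default_rng(0)
train_z = rng.normal(size=(200, 32))        # embeddings of labeled training cases
in_dist = rng.normal(size=32)               # query near the training distribution
out_dist = rng.normal(size=32) + 3.0        # query far from the training distribution
print(distance_uncertainty(in_dist, train_z))   # lower score
print(distance_uncertainty(out_dist, train_z))  # higher score
```

Under this sketch, the same embeddings that drive the score can be projected (e.g., to 2D) to produce the kind of embedding-based visualization the abstract describes, placing a query case relative to the training cases it is scored against.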

Keywords

clinical decision support systems, explainable AI, human centered AI, human-AI collaboration, physical stroke rehabilitation, reliance, trust, trustworthy AI, uncertainty quantification

Discipline

Artificial Intelligence and Robotics

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Sustainability

Publication

FAccT '25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, Athens, Greece, June 23-26

First Page

2274

Last Page

2289

ISBN

9798400714825

Identifier

10.1145/3715275.3732155

Publisher

ACM

City or Country

New York

Additional URL

https://doi.org/10.1145/3715275.3732155
