Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

1-2026

Abstract

Although Artificial Intelligence (AI) systems are playing an increasing role in critical domains such as healthcare, finance, and autonomous systems, their decision-making processes remain largely opaque. This paper examines the challenges of AI transparency, addressing the “black box” problem using Explainable AI (XAI) techniques such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). It also explores the ethical, regulatory, and societal implications of AI opacity and proposes a Comprehensive AI Observability (CAO) Framework that integrates deep explainability, provenance tracking, and real-time monitoring to enhance AI accountability. By bridging technical solutions with governance structures, this research emphasizes the necessity for adaptive, transparent AI-based solutions that align with ethical norms and expectations. The findings underscore the importance of interdisciplinary collaboration in making AI decisions interpretable, ensuring trust, fairness, and responsible deployment in practical applications.

Keywords

Explainable AI, Trust and Ethics in AI, Human-AI Collaboration, Data Security and Provenance, AI in Healthcare and Decision-Making

Discipline

Artificial Intelligence and Robotics

Research Areas

Information Systems and Management

Areas of Excellence

Digital transformation

Publication

Proceedings of the 27th International Conference on Human-Computer Interaction, HCII 2025, Gothenburg, Sweden, June 22-27

Volume

16343

First Page

78

Last Page

93

ISBN

9783032131669

Identifier

10.1007/978-3-032-13167-6_6

Publisher

Springer

City or Country

Cham

Additional URL

https://doi.org/10.1007/978-3-032-13167-6_6
