Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
8-2025
Abstract
Effective risk assessment is paramount for responsible generative AI (GenAI) deployment. Traditional governance approaches that rely on manual reviews are inadequate given the scale and velocity of GenAI outputs; a risk-based approach incorporating real-time monitoring and governance is therefore needed. In this research, we examine how the efficacy of suggestive versus supportive explanations of AI’s risk assessment of GenAI outputs in determining user acceptance is moderated by user domain expertise and the AI’s assessed risk level. We hypothesize that cognitive involvement increases with the AI’s assessed risk, with higher risks triggering more critical evaluation. Drawing on the elaboration likelihood model, we hypothesize that supportive explanations have a greater effect on experts and suggestive explanations have a greater effect on novices. We also hypothesize that as the AI’s assessed risk increases, the reliance of both experts and novices on supportive explanations increases. This research provides insight into the efficacy of explanation styles for AI governance systems.
Keywords
Generative AI governance, Risk assessment, Elaboration likelihood model, User domain expertise
Discipline
Artificial Intelligence and Robotics
Research Areas
Information Systems and Management
Areas of Excellence
Digital transformation
Publication
Proceedings of the 31st Americas Conference on Information Systems (AMCIS 2025), Montreal, Canada, August 14-16
First Page
1
Last Page
5
Publisher
AIS
City or Country
United States of America
Citation
YOUNG, Wu Jiaqi and NAH, Fiona Fui-hoon.
AI-assisted risk assessment in generative AI governance. (2025). Proceedings of the 31st Americas Conference on Information Systems (AMCIS 2025), Montreal, Canada, August 14-16. 1-5.
Available at: https://ink.library.smu.edu.sg/sis_research/10859
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://aisel.aisnet.org/amcis2025/intelfuture/intelfuture/48/