Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
8-2018
Abstract
Artificial Intelligence (AI) is able to perform at human level and even surpass human performance in some tasks. Recent cases such as self-driving cars, the cashier-free supermarket Amazon Go, and virtual assistants such as Apple’s Siri and Google Assistant illustrate the current and future potential of AI. AI and its applications have infiltrated human work and daily life. It is inevitable that humans will need to build a working relationship with AI and its applications. On one hand, humans can benefit from this new technology; for instance, a home robot can free a homemaker from mundane and monotonous tasks (Siau 2017, Siau 2018). On the other hand, the potential threats posed by AI and the possible social upheavals should not be overlooked. The fatal crashes of self-driving cars, data breaches at well-known websites, and the potential unemployment of cab and truck drivers are hindering humans’ trust in and acceptance of this new technology. A study conducted by HSBC shows that only 8% of the participants would trust a machine offering mortgage advice, compared with 41% who would trust a mortgage broker. Trust plays an important role in directing human behaviors, including the acceptance of a person or an object. Research has shown that trust is crucial in organizational relationships, e-commerce, online environments, and human-technology interactions (Siau and Wang 2018). AI is a cutting-edge and powerful technology that can take over human tasks. In the healthcare field, AI is being integrated with other technologies, such as robots, to help with diagnoses and to perform surgery. However, many doctors are still skeptical about technologies (Siau and Shen 2006), such as AI assistants, and most patients prefer to trust a human doctor rather than a robot doctor. Further, many aspects of machine learning, such as deep learning, are still “black boxes,” and the lack of explainability and transparency does not help the trust-building process. Trust is the cornerstone of effective doctor-patient relationships. This research studies the following questions: What factors affect trust building with AI applications in the healthcare field? How can humans’ trust in AI healthcare applications be improved? Trust building is a dynamic process, involving movement from initial trust building to continuous trust. Image design, predictability, usability, and privacy are important factors that affect trust building (Siau and Wang 2018). Thus, this research will also study the impact of these factors at different trust-building stages in healthcare. Further, in the healthcare field, trust in AI comes from two groups: the physicians/doctors and the users/patients. This study will focus on trust building from the users’ perspective. An experimental study will be conducted. The participants will be recruited from medical schools and hospitals in the US and other countries. We aim to develop a trust-building model that can be utilized in designing healthcare AI applications.
Discipline
Artificial Intelligence and Robotics | Health Information Technology
Research Areas
Information Systems and Management
Areas of Excellence
Digital transformation
Publication
Proceedings of the 24th Americas Conference on Information Systems (AMCIS 2018), New Orleans, LA, August 16-18
First Page
1
Last Page
1
ISBN
9780996683166
Publisher
AMCIS
City or Country
New Orleans, LA
Citation
WANG, Weiyu and SIAU, Keng.
Trusting artificial intelligence in healthcare. (2018). Proceedings of the 24th Americas Conference on Information Systems (AMCIS 2018), New Orleans, LA, August 16-18. 1-1.
Available at: https://ink.library.smu.edu.sg/sis_research/9403
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.