"Interpretable machine learning models to predict hospital patient read" by Xiaoquan GAO, Sabriya ALAM et al.
 

Publication Type

Journal Article

Version

publishedVersion

Publication Date

6-2023

Abstract

Background: Advanced machine learning models have received wide attention in assisting medical decision making due to the greater accuracy they can achieve. However, their limited interpretability poses barriers to adoption by practitioners. Recent advancements in interpretable machine learning tools allow us to look inside the black box of advanced prediction methods and extract interpretable models while maintaining similar prediction accuracy, but few studies have investigated the specific hospital readmission prediction problem in this spirit.

Methods: Our goal is to develop a machine learning (ML) algorithm that can predict 30- and 90-day hospital readmissions as accurately as black-box algorithms while providing medically interpretable insights into readmission risk factors. Leveraging a state-of-the-art interpretable ML model, we use a two-step Extracted Regression Tree approach to achieve this goal. In the first step, we train a black-box prediction algorithm. In the second step, we extract a regression tree from the output of the black-box algorithm that allows direct interpretation of medically relevant risk factors. We use data from a large teaching hospital in Asia to train the ML model and verify our two-step approach.

Results: The two-step method obtains prediction performance similar to that of the best black-box model, such as a neural network, as measured by three metrics: accuracy, the Area Under the Curve (AUC), and the Area Under the Precision-Recall Curve (AUPRC), while maintaining interpretability. Further, to examine whether the prediction results match known medical insights (i.e., whether the model is truly interpretable and produces reasonable results), we show that key readmission risk factors extracted by the two-step approach are consistent with those found in the medical literature.

Conclusions: The proposed two-step approach yields meaningful prediction results that are both accurate and interpretable. This study suggests a viable means to improve the trust of machine learning based models in clinical practice for predicting readmissions through the two-step approach.
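The two-step idea in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the data, model choices, and hyperparameters are hypothetical stand-ins. Step 1 trains a black-box model (here a small neural network); step 2 fits a depth-limited regression tree to the black box's predicted readmission probabilities, so the tree's splits can be read directly as risk-factor rules.

```python
# Hypothetical sketch of a two-step "extracted regression tree" workflow.
# Not the paper's implementation; features, labels, and settings are
# placeholders for illustration only.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for administrative patient features and a binary
# 30-day readmission label.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Step 1: train a black-box prediction model.
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0).fit(X, y)
risk = black_box.predict_proba(X)[:, 1]  # predicted readmission risk

# Step 2: extract a shallow regression tree that mimics the black box's
# risk scores; its splits expose the dominant risk factors.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, risk)
```

Limiting `max_depth` is what keeps the extracted model interpretable: each root-to-leaf path reads as a small set of threshold rules on patient features, while the tree's fidelity to the black box can be checked with `tree.score(X, risk)`.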

Keywords

Hospital readmission, Interpretable machine learning, Risk prediction, Administrative data, Risk factors

Discipline

Health and Medical Administration | Operations and Supply Chain Management

Research Areas

Operations Management

Publication

BMC Medical Informatics and Decision Making

Volume

23

Issue

1

First Page

1

Last Page

11

ISSN

1472-6947

Identifier

10.1186/s12911-023-02193-5

Publisher

BioMed Central

Copyright Owner and License

Authors-CC-BY

Additional URL

https://doi.org/10.1186/s12911-023-02193-5
