AI Can Open Up Beds in the ICU

2025-03-05 The University of Texas at Austin (UT Austin)

A research team led by Professor Indranil Bardhan at the McCombs School of Business, The University of Texas at Austin, has developed a new model that uses artificial intelligence (AI) to optimize bed management in intensive care units (ICUs). The model predicts a patient's length of stay in the ICU from 47 attributes, including age, gender, vital signs, medications, and diagnoses. It also shows visually how each attribute influences the prediction, making the AI's reasoning easier for physicians to follow. This "explainable AI (XAI)" approach lets healthcare providers plan patient admissions and discharges more effectively and improve staffing and resource management.
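
The article describes the model only at a high level: a supervised predictor trained on 47 patient attributes, plus a visual display of each attribute's influence. As a rough sketch of that general workflow (not the authors' implementation, which is graph-based), the Python below fits a generic regressor on synthetic patient records and ranks per-feature influence with permutation importance; the synthetic data, the estimator choice, and the feature names are all illustrative assumptions.

# Illustrative sketch only: a generic ICU length-of-stay regressor with
# per-feature attributions. The paper's actual model is graph-based; the
# estimator, synthetic data, and feature names here are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_features = 1000, 47          # 47 attributes, as in the article
X = rng.normal(size=(n_patients, n_features))
# Synthetic target: length of stay (days) driven by a few features plus noise.
y = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 5] + rng.normal(scale=0.5, size=n_patients)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Rank how strongly each attribute influences predicted length of stay.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
for i in top:
    print(f"feature_{i:02d}: importance={imp.importances_mean[i]:.3f}")

In practice, per-feature scores like these would be rendered as the kind of visual explanation the article describes, so a clinician can see which vitals, medications, or diagnoses are driving a long predicted stay.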

<Related Information>

An Explainable Artificial Intelligence Approach Using Graph Learning to Predict Intensive Care Unit Length of Stay

Tianjian Guo, Indranil R. Bardhan, Ying Ding, Shichang Zhang
Information Systems Research  Published: 11 Dec 2024
DOI: https://doi.org/10.1287/isre.2023.0029

Abstract

Intensive care units (ICUs) are critical for treating severe health conditions but represent significant hospital expenditures. Accurate prediction of ICU length of stay (LoS) can enhance hospital resource management, reduce readmissions, and improve patient care. In recent years, widespread adoption of electronic health records and advancements in artificial intelligence (AI) have facilitated accurate predictions of ICU LoS. However, there is a notable gap in the literature on explainable artificial intelligence (XAI) methods that identify interactions between model input features to predict patient health outcomes. This gap is especially noteworthy as the medical literature suggests that complex interactions between clinical features are likely to significantly impact patient health outcomes. We propose a novel graph learning-based approach that offers state-of-the-art prediction and greater interpretability for ICU LoS prediction. Specifically, our graph-based XAI model can generate interaction-based explanations supported by evidence-based medicine, which provide rich patient-level insights compared with existing XAI methods. We test the statistical significance of our XAI approach using a distance-based separation index and utilize perturbation analyses to examine the sensitivity of our model explanations to changes in input features. Finally, we validate the explanations of our graph learning model using the conceptual evaluation property (Co-12) framework and a small-scale user study of ICU clinicians. Our approach offers interpretable predictions of ICU LoS grounded in design science research, which can facilitate greater integration of AI-enabled decision support systems in clinical workflows, thereby enabling clinicians to derive greater value.
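
The abstract's key differentiator is explanations built from interactions between input features, rather than per-feature scores alone. As a minimal, hypothetical illustration of what an interaction-based explanation measures (not the paper's graph learning method), the probe below scores non-additivity for a feature pair: delta = f(x+e_i+e_j) - f(x+e_i) - f(x+e_j) + f(x), which is near zero when two features contribute additively and large when they genuinely interact. The stand-in model, perturbation size, and chosen pairs are assumptions.

# Illustrative probe of pairwise feature interactions for one patient.
# This is not the paper's graph learning method; it only demonstrates the
# idea behind interaction-based explanation via a non-additivity score.
import numpy as np

def predict(x):
    # Stand-in model with a built-in interaction between features 0 and 1.
    return 2.0 * x[0] + 0.5 * x[1] + 1.5 * x[0] * x[1] + 0.3 * x[2]

def interaction_score(f, x, i, j, eps=1.0):
    # delta = f(x_ij) - f(x_i) - f(x_j) + f(x): zero for additive pairs.
    xi, xj, xij = x.copy(), x.copy(), x.copy()
    xi[i] += eps
    xj[j] += eps
    xij[i] += eps; xij[j] += eps
    return f(xij) - f(xi) - f(xj) + f(x)

x = np.zeros(47)  # one synthetic patient with 47 attributes
for i, j in [(0, 1), (0, 2), (1, 2)]:
    print(f"interaction({i},{j}) = {interaction_score(predict, x, i, j):+.2f}")

Running this reports a nonzero score only for the (0, 1) pair, mirroring how an interaction-aware explainer would surface which clinical feature pairs jointly drive a patient's predicted length of stay.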
