AI Helps Medical Professionals Read Confusing EEGs to Save Lives

2024-05-29 Duke University

Researchers at Duke University have developed an assistive machine-learning model that dramatically improves medical professionals' ability to read the electroencephalograms (EEGs) of intensive-care patients. Because EEG is the only way to tell whether an unconscious patient is experiencing seizures or seizure-like events, the tool has the potential to save many lives each year. The study used EEG samples from more than 2,700 patients, which more than 120 experts annotated by picking out the relevant features and classifying them as seizures or seizure-like events. The resulting model classifies EEG patterns and visually presents the waveform patterns its decision was based on, along with similar expert-diagnosed charts. In experiments with the model, medical professionals' EEG classification accuracy rose from 47% to 71%. The study shows that interpretable machine-learning models can be more accurate and more useful than "black box" models.
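The system's explanations are case-based: alongside each prediction, it shows expert-labeled EEG samples that resemble the one under review. Below is a minimal sketch of that idea in Python, assuming a trained encoder has already mapped each 50-second EEG sample to an embedding vector; the function and variable names are illustrative, not the authors' code:

    import numpy as np

    CLASSES = ["seizure", "LPD", "GPD", "LRDA", "GRDA", "other"]

    def classify_with_examples(embedding, ref_embeddings, ref_labels, k=3):
        """Pick the class of the most similar expert-labeled reference
        samples and return their indices so a clinician can inspect them."""
        # Cosine similarity between the new sample and every reference case.
        q = embedding / np.linalg.norm(embedding)
        refs = ref_embeddings / np.linalg.norm(ref_embeddings, axis=1, keepdims=True)
        sims = refs @ q
        nearest = np.argsort(sims)[::-1][:k]      # k most similar labeled cases
        votes = [ref_labels[i] for i in nearest]
        label = max(set(votes), key=votes.count)  # majority vote among those cases
        return CLASSES[label], nearest            # prediction + supporting cases

Returning the indices of the nearest labeled cases is what makes this kind of explanation faithful in the sense used in the abstract below: the clinician sees the same evidence the classifier voted on.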

<Related Information>

Improving Clinician Performance in Classifying EEG Patterns on the Ictal–Interictal Injury Continuum Using Interpretable Machine Learning

Alina Jade Barnett, Ph.D.; Zhicheng Guo; Jin Jing, Ph.D.; Wendong Ge, Ph.D.; Peter W. Kaplan, M.B.B.S.; Wan Yee Kong, M.D.; Ioannis Karakis, M.D., Ph.D.; +7; and M. Brandon Westover, M.D., Ph.D.
NEJM AI, Published: May 23, 2024
DOI: 10.1056/AIoa2300331

Abstract

BACKGROUND
In intensive care units (ICUs), critically ill patients are monitored with electroencephalography (EEG) to prevent serious brain injury. EEG monitoring is constrained by clinician availability, and EEG interpretation can be subjective and prone to interobserver variability. Automated deep-learning systems for EEG could reduce human bias and accelerate the diagnostic process. However, existing uninterpretable (black-box) deep-learning models are untrustworthy, difficult to troubleshoot, and lack accountability in real-world applications, leading to a lack of both trust and adoption by clinicians.

METHODS
We developed an interpretable deep-learning system that accurately classifies six patterns of potentially harmful EEG activity — seizure, lateralized periodic discharges (LPDs), generalized periodic discharges (GPDs), lateralized rhythmic delta activity (LRDA), generalized rhythmic delta activity (GRDA), and other patterns — while providing faithful case-based explanations of its predictions. The model was trained on 50,697 total 50-second continuous EEG samples collected from 2711 patients in the ICU between July 2006 and March 2020 at Massachusetts General Hospital. EEG samples were labeled as one of the six EEG patterns by 124 domain experts and trained annotators. To evaluate the model, we asked eight medical professionals with relevant backgrounds to classify 100 EEG samples into the six pattern categories — once with and once without artificial intelligence (AI) assistance — and we assessed the assistive power of this interpretable system by comparing the diagnostic accuracy of the two methods. The model’s discriminatory performance was evaluated with area under the receiver-operating characteristic curve (AUROC) and area under the precision–recall curve. The model’s interpretability was measured with task-specific neighborhood agreement statistics that interrogated the similarities of samples and features. In a separate analysis, the latent space of the neural network was visualized by using dimension reduction techniques to examine whether the ictal–interictal injury continuum hypothesis, which asserts that seizures and seizure-like patterns of brain activity lie along a spectrum, is supported by data.
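A minimal sketch, with placeholder data, of how the per-class discriminatory metrics named above (AUROC and area under the precision–recall curve) are typically computed one-vs-rest for a six-class problem; the random labels and scores stand in for the held-out test labels and model probabilities, which are not reproduced here:

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    CLASSES = ["seizure", "LPD", "GPD", "LRDA", "GRDA", "other"]
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 6, size=500)         # placeholder ground-truth labels
    y_prob = rng.dirichlet(np.ones(6), size=500)  # placeholder class probabilities

    for i, name in enumerate(CLASSES):
        y_bin = (y_true == i).astype(int)         # one-vs-rest binarization
        print(f"{name}: AUROC={roc_auc_score(y_bin, y_prob[:, i]):.2f}, "
              f"AUPRC={average_precision_score(y_bin, y_prob[:, i]):.2f}")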

RESULTS
The performance of all users significantly improved when provided with AI assistance. Mean user diagnostic accuracy improved from 47 to 71% (P<0.04). The model achieved AUROCs of 0.87, 0.93, 0.96, 0.92, 0.93, and 0.80 for the classes seizure, LPD, GPD, LRDA, GRDA, and other patterns, respectively. This performance was significantly higher than that of a corresponding uninterpretable black-box model (with P<0.0001). Videos traversing the ictal–interictal injury manifold from dimension reduction (a two-dimensional representation of the original high-dimensional feature space) give insight into the layout of EEG patterns within the network’s latent space and illuminate relationships between EEG patterns that were previously hypothesized but had not yet been shown explicitly. These results indicate that the ictal–interictal injury continuum hypothesis is supported by data.
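The manifold videos described above come from projecting the network's latent features down to two dimensions. A hedged sketch of that kind of visualization, using PCA as one possible dimension-reduction technique (the abstract does not specify which method was used, and the latent features and labels below are synthetic placeholders):

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(1000, 128))   # placeholder penultimate-layer features
    labels = rng.integers(0, 6, size=1000)  # placeholder pattern labels (0..5)

    # Project the high-dimensional features onto two principal components;
    # nearby points on the resulting map are samples the network embeds similarly.
    coords = PCA(n_components=2).fit_transform(latent)
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=4, cmap="tab10")
    plt.xlabel("component 1")
    plt.ylabel("component 2")
    plt.savefig("latent_map.png")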

CONCLUSIONS
Users showed significant pattern classification accuracy improvement with the assistance of this interpretable deep-learning model. The interpretable design facilitates effective human–AI collaboration; this system may improve diagnosis and patient care in clinical settings. The model may also provide a better understanding of how EEG patterns relate to each other along the ictal–interictal injury continuum. (Funded by the National Science Foundation, National Institutes of Health, and others.)
