2025-06-23 Yale University
A clinician performing an echocardiogram of a patient’s heart
<Related Links>
- https://medicine.yale.edu/internal-medicine/news-article/ai-tool-interprets-echocardiograms-in-minutes/
- https://jamanetwork.com/journals/jama/article-abstract/2835630
- https://academic.oup.com/eurheartj/article/44/43/4592/7248551?login=false
Complete AI-Enabled Echocardiography Interpretation With Multitask Deep Learning
Gregory Holste, MSE; Evangelos K. Oikonomou, MD, DPhil; Márton Tokodi, MD, PhD; et al
JAMA, published June 23, 2025
DOI: 10.1001/jama.2025.8731
Key Points
Question Can artificial intelligence (AI) fully automate echocardiogram interpretation?
Findings This study reports the development and validation of an automated AI system for echocardiogram analysis, PanEcho, that performed 18 diagnostic classification tasks with a median area under the receiver operating characteristic curve of 0.91 and 21 echocardiographic parameter estimation tasks with a median normalized mean absolute error of 0.13.
Meaning An AI system can automate complete echocardiogram interpretation with high accuracy, potentially accelerating workflows and enabling rapid cardiovascular health screening in point-of-care settings with limited access to trained experts.
Abstract
Importance Echocardiography is a cornerstone of cardiovascular care, but relies on expert interpretation and manual reporting from a series of videos. An artificial intelligence (AI) system, PanEcho, has been proposed to automate echocardiogram interpretation with multitask deep learning.
Objective To develop and evaluate the accuracy of an AI system on a comprehensive set of 39 labels and measurements on transthoracic echocardiography (TTE).
Design, Setting, and Participants This study represents the development and retrospective, multisite validation of an AI system. PanEcho was developed using TTE studies conducted at Yale New Haven Health System (YNHHS) hospitals and clinics from January 2016 to June 2022 during routine care. The model was internally validated in a temporally distinct YNHHS cohort from July to December 2022, externally validated across 4 diverse external cohorts, and publicly released.
Main Outcomes and Measures The primary outcome was the area under the receiver operating characteristic curve (AUC) for diagnostic classification tasks and mean absolute error for parameter estimation tasks, comparing AI predictions with the assessment of the interpreting cardiologist.
Results This study included 1.2 million echocardiographic videos from 32 265 TTE studies of 24 405 patients across YNHHS hospitals and clinics. The AI system performed 18 diagnostic classification tasks with a median (IQR) AUC of 0.91 (0.88-0.93) and estimated 21 echocardiographic parameters with a median (IQR) normalized mean absolute error of 0.13 (0.10-0.18) in internal validation. For instance, the model accurately estimated left ventricular ejection fraction (mean absolute error: 4.2% internal; 4.5% external) and detected moderate or worse left ventricular systolic dysfunction (AUC: 0.98 internal; 0.99 external), right ventricular systolic dysfunction (AUC: 0.93 internal; 0.94 external), and severe aortic stenosis (AUC: 0.98 internal; 1.00 external). The AI system maintained excellent performance in limited imaging protocols, performing 15 diagnosis tasks with a median (IQR) AUC of 0.91 (0.87-0.94) in an abbreviated TTE cohort and 14 tasks with a median (IQR) AUC of 0.85 (0.77-0.87) on real-world point-of-care ultrasonography acquisitions from YNHHS emergency departments.
Conclusions and Relevance In this study, an AI system that automatically interprets echocardiograms maintained high accuracy across geography and time from complete and limited studies. This AI system may be used as an adjunct reader in echocardiography laboratories or as an AI-enabled screening tool in point-of-care settings, following prospective evaluation in the respective clinical workflows.
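The headline metrics above (median AUC across 18 classification tasks, median normalized mean absolute error across 21 estimation tasks) can be reproduced from per-task predictions. The sketch below is a minimal numpy illustration, not the PanEcho evaluation code; in particular, the abstract does not state how the MAE is normalized, so the range-based scaling used here is an assumption.

```python
import numpy as np

def auc(y_true, y_score):
    """AUROC via the rank-sum (Mann-Whitney U) formulation.
    Ties in y_score are ignored for brevity."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    # sum of positive-class ranks, minus the minimum possible sum,
    # over the number of (positive, negative) pairs
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def normalized_mae(y_true, y_pred):
    """MAE scaled by the observed label range -- one plausible
    normalization; the paper's exact definition may differ."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_true - y_pred))
    return mae / (y_true.max() - y_true.min())
```

The reported medians would then be `np.median` over the per-task values of `auc` and `normalized_mae`, respectively.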
Severe aortic stenosis detection by deep learning applied to echocardiography
Gregory Holste, Evangelos K Oikonomou, Bobak J Mortazavi, Andreas Coppi, Kamil F Faridi, Edward J Miller, John K Forrest, Robert L McNamara, Lucila Ohno-Machado, Neal Yuan …
European Heart Journal, published 23 August 2023
DOI: https://doi.org/10.1093/eurheartj/ehad456
Abstract
Background and Aims
Early diagnosis of aortic stenosis (AS) is critical to prevent morbidity and mortality but requires skilled examination with Doppler imaging. This study reports the development and validation of a novel deep learning model that relies on two-dimensional (2D) parasternal long axis videos from transthoracic echocardiography without Doppler imaging to identify severe AS, suitable for point-of-care ultrasonography.
Methods and results
In a training set of 5257 studies (17 570 videos) from 2016 to 2020 [Yale-New Haven Hospital (YNHH), Connecticut], an ensemble of three-dimensional convolutional neural networks was developed to detect severe AS, leveraging self-supervised contrastive pretraining for label-efficient model development. This deep learning model was validated in a temporally distinct set of 2040 consecutive studies from 2021 from YNHH as well as two geographically distinct cohorts of 4226 and 3072 studies, from California and other hospitals in New England, respectively. The deep learning model achieved an area under the receiver operating characteristic curve (AUROC) of 0.978 (95% CI: 0.966, 0.988) for detecting severe AS in the temporally distinct test set, maintaining its diagnostic performance in geographically distinct cohorts [0.952 AUROC (95% CI: 0.941, 0.963) in California and 0.942 AUROC (95% CI: 0.909, 0.966) in New England]. The model was interpretable with saliency maps identifying the aortic valve, mitral annulus, and left atrium as the predictive regions. Among non-severe AS cases, predicted probabilities were associated with worse quantitative metrics of AS suggesting an association with various stages of AS severity.
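The "self-supervised contrastive pretraining" mentioned above is typically an InfoNCE-style objective, in which two augmented views of the same clip are pulled together in embedding space while other clips in the batch act as negatives. The following is a minimal numpy sketch of that standard objective, offered as an illustration; the paper's exact loss, temperature, and augmentations are not given in the abstract and are assumptions here.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE contrastive loss for paired embeddings.
    z1[i] and z2[i] are embeddings of two augmented views of the
    same video clip; off-diagonal pairs serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature          # cosine similarity matrix
    # row-wise softmax cross-entropy with the positive on the diagonal
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss yields video representations that can then be fine-tuned with far fewer severe-AS labels, which is the "label-efficient" aspect described above.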
Conclusion
This study developed and externally validated an automated approach for severe AS detection using single-view 2D echocardiography, with potential utility for point-of-care screening.
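Intervals like the "0.978 (95% CI: 0.966, 0.988)" reported above are commonly obtained with a percentile bootstrap over the test set. The sketch below shows that generic procedure in numpy; the paper does not state its exact CI method, so this is an assumption, and the pairwise AUROC here omits tie handling for brevity.

```python
import numpy as np

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Point AUROC plus a percentile-bootstrap (1 - alpha) CI."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)

    def auc(t, s):
        pos, neg = s[t == 1], s[t == 0]
        # fraction of (positive, negative) pairs ranked correctly
        return (pos[:, None] > neg[None, :]).mean()

    stats = []
    n = len(y_true)
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        t = y_true[idx]
        if t.min() == t.max():        # resample must contain both classes
            continue
        stats.append(auc(t, y_score[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return auc(y_true, y_score), lo, hi
```

With the real test-set labels and model probabilities in place of toy data, this returns the point estimate and interval in the form the abstract reports.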


