2024-10-21 University of California San Diego (UCSD)
A new pilot study examines AI tools that streamline reporting processes in a hospital setting, which could enhance health care delivery and improve access to quality data.
<Related information>
- https://today.ucsd.edu/story/study-ai-could-transform-how-hospitals-produce-quality-reports
- https://ai.nejm.org/doi/10.1056/AIcs2400420
Large Language Models for More Efficient Reporting of Hospital Quality Measures
Aaron Boussina, Ph.D., Rishivardhan Krishnamoorthy, M.S., Kimberly Quintero, R.N., M.S., Shreyansh Joshi, Gabriel Wardi, M.D., Hayden Pour, M.S., Nicholas Hilbert, R.N., M.S.N., et al.
NEJM AI Published October 21, 2024
DOI: 10.1056/AIcs2400420
Abstract
Hospital quality measures are a vital component of a learning health system, yet they can be costly to report, statistically underpowered, and inconsistent due to poor interrater reliability. Large language models (LLMs) have recently demonstrated impressive performance on health care–related tasks and offer a promising way to provide accurate abstraction of complete charts at scale. To evaluate this approach, we deployed an LLM-based system that ingests Fast Healthcare Interoperability Resources data and outputs a completed Severe Sepsis and Septic Shock Management Bundle (SEP-1) abstraction. We tested the system on a sample of 100 manual SEP-1 abstractions that University of California San Diego Health reported to the Centers for Medicare & Medicaid Services in 2022. The LLM system achieved agreement with manual abstractors on the measure category assignment in 90 of the abstractions (90%; κ=0.82; 95% confidence interval, 0.71 to 0.92). Expert review of the 10 discordant cases identified four that were mistakes introduced by manual abstraction. This pilot study suggests that LLMs using interoperable electronic health record data may perform accurate abstractions for complex quality measures. (Funded by the National Institute of Allergy and Infectious Diseases [1R42AI177108-1] and others.)
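To make the described pipeline concrete, the following is a minimal sketch of how an LLM-based abstraction step over interoperable FHIR data might look. It is not the authors' implementation: the FHIR server URL, the $everything query, the model name, the prompt wording, and the simplified category labels ("pass", "fail", "excluded") are all illustrative assumptions standing in for the full multi-step SEP-1 abstraction logic evaluated in the study.

```python
# Hypothetical sketch: LLM-assisted SEP-1 abstraction from FHIR data.
# Endpoint URL, model name, prompt, and category labels are assumptions,
# not the system described in the paper.

import json
import requests
from openai import OpenAI

FHIR_BASE = "https://fhir.example.org"  # assumed FHIR server endpoint
client = OpenAI()                       # assumes OPENAI_API_KEY is set in the environment


def fetch_patient_record(patient_id: str) -> dict:
    """Pull a patient's full record as a FHIR Bundle via the standard $everything operation."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}/$everything",
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def abstract_sep1(bundle: dict) -> str:
    """Ask an LLM for a simplified SEP-1 determination from the FHIR Bundle.

    The labels below are placeholders, not the CMS measure category codes;
    a real abstraction would walk the bundle elements (time zero, lactate,
    antibiotics, fluids, reassessment) step by step.
    """
    prompt = (
        "You are a hospital quality abstractor. Using the FHIR resources below, "
        "determine whether this encounter passed the SEP-1 (Severe Sepsis and "
        "Septic Shock Management Bundle) measure. "
        "Answer with exactly one word: pass, fail, or excluded.\n\n"
        + json.dumps(bundle)[:100_000]  # truncate to stay within the model's context window
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()


if __name__ == "__main__":
    bundle = fetch_patient_record("example-patient-id")
    print("LLM SEP-1 determination:", abstract_sep1(bundle))
```

Agreement between such LLM outputs and manual abstractions can then be summarized with Cohen's kappa (for example, scikit-learn's cohen_kappa_score), which is the statistic the study reports (κ=0.82) alongside a 95% confidence interval.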