2024-04-15 University of California San Diego (UCSD)
<Related information>
- https://today.ucsd.edu/story/study-reveals-ai-enhances-physician-patient-communication
- https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2817615
AI-Generated Draft Replies Integrated Into Health Records and Physicians’ Electronic Communication
Ming Tai-Seale, PhD, MPH; Sally L. Baxter, MD, MSc; Florin Vaida, PhD; et al
JAMA Network Open. Published: April 15, 2024
DOI: 10.1001/jamanetworkopen.2024.6565
Key Points
Question Is access to generative artificial intelligence–drafted replies associated with reduced physician time spent reading and replying to patient messages and with changes in reply length?
Findings In this quality improvement study including 122 physicians, access to generative AI–drafted replies was associated with increased message read time, no change in reply time, and significantly longer replies. Physicians valued AI-generated drafts as a compassionate starting point for their replies and also noted areas for improvement.
Meaning These findings suggest that generative AI was not associated with reduced time spent writing a reply but was associated with longer read time, longer replies, and perceived value in crafting a more compassionate reply.
Abstract
Importance Timely tests are warranted to assess the association between generative artificial intelligence (GenAI) use and physicians’ work efforts.
Objective To investigate the association between GenAI-drafted replies for patient messages and physician time spent on answering messages and the length of replies.
Design, Setting, and Participants This randomized waiting-list quality improvement (QI) study was conducted from June to August 2023 in an academic health system. Primary care physicians were randomized to an immediate activation group or a delayed activation group. Data were analyzed from August to November 2023.
Exposure Access to GenAI-drafted replies for patient messages.
Main Outcomes and Measures Time spent (1) reading messages, (2) replying to messages, (3) length of replies, and (4) physician likelihood to recommend GenAI drafts. The a priori hypothesis was that GenAI drafts would be associated with less physician time spent reading and replying to messages. A mixed-effects model was used.
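The abstract states only that a mixed-effects model was used; the paper's exact specification is not given here. A minimal sketch of one plausible setup, assuming per-message read times modeled on the log scale with a fixed effect for GenAI access and a random intercept per physician, using synthetic data and the `statsmodels` formula API (all variable names and effect sizes below are illustrative, not from the study):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_physicians, n_msgs = 40, 50

# Synthetic per-message read times (log seconds): each physician gets a
# random baseline, and GenAI access adds a fixed effect of 0.18 on the
# log scale (about a 20% multiplicative increase in read time).
rows = []
for doc in range(n_physicians):
    baseline = rng.normal(3.3, 0.3)   # physician-level random intercept
    genai = doc % 2                   # half the physicians have GenAI access
    for _ in range(n_msgs):
        log_t = baseline + 0.18 * genai + rng.normal(0, 0.5)
        rows.append({"physician": doc, "genai": genai, "log_read": log_t})
df = pd.DataFrame(rows)

# Mixed-effects model: fixed effect for GenAI access,
# random intercept grouped by physician.
model = smf.mixedlm("log_read ~ genai", df, groups=df["physician"]).fit()
beta = model.params["genai"]
pct = (np.exp(beta) - 1) * 100
print(f"estimated % change in read time: {pct:.1f}%")
```

Because the outcome is modeled in log seconds, the fixed-effect coefficient converts directly to the percent changes the abstract reports, via exp(β) − 1.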
Results Fifty-two physicians participated in this QI study, with 25 randomized to the immediate activation group and 27 randomized to the delayed activation group. A contemporary control group included 70 physicians. There were 18 female participants (72.0%) in the immediate group and 17 female participants (63.0%) in the delayed group; the median age range was 35-44 years in the immediate group and 45-54 years in the delayed group. The median (IQR) time spent reading messages in the immediate group was 26 (11-69) seconds at baseline, 31 (15-70) seconds 3 weeks after entry to the intervention, and 31 (14-70) seconds 6 weeks after entry. The delayed group’s median (IQR) read time was 25 (10-67) seconds at baseline, 29 (11-77) seconds during the 3-week waiting period, and 32 (15-72) seconds 3 weeks after entry to the intervention. The contemporary control group’s median (IQR) read times were 21 (9-54), 22 (9-63), and 23 (9-60) seconds in corresponding periods. The estimated association of GenAI was a 21.8% increase in read time (95% CI, 5.2% to 41.0%; P = .008), a -5.9% change in reply time (95% CI, -16.6% to 6.2%; P = .33), and a 17.9% increase in reply length (95% CI, 10.1% to 26.2%; P < .001). Participants recognized GenAI’s value and suggested areas for improvement.
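The percent changes above (e.g., the 21.8% increase in read time) are the natural back-transformation of log-scale coefficients. A small worked example of that arithmetic, assuming a log-linear model (the conversion itself is standard; the link to this study's exact model is an assumption):

```python
import math

def pct_change_from_log_coef(beta: float) -> float:
    """Convert a log-scale regression coefficient to a percent change."""
    return (math.exp(beta) - 1.0) * 100.0

def log_coef_from_pct(pct: float) -> float:
    """Inverse: recover the log-scale coefficient from a percent change."""
    return math.log(1.0 + pct / 100.0)

# A reported 21.8% increase in read time corresponds to a log-scale
# coefficient of roughly 0.197; CI bounds transform the same way.
beta_read = log_coef_from_pct(21.8)
print(round(beta_read, 3))                             # ≈ 0.197
print(round(pct_change_from_log_coef(beta_read), 1))   # round-trips to 21.8
```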
Conclusions and Relevance In this QI study, GenAI-drafted replies were associated with significantly increased read time, no change in reply time, significantly increased reply length, and some perceived benefits. Rigorous empirical tests are necessary to further examine GenAI’s performance. Future studies should examine patient experience and compare multiple GenAIs, including those with medical training.