AI-Assisted Alzheimer's Risk Screening (Would You Want to Know if Alzheimer's Were in Your Future?)


2025-06-25 Columbia University

Researchers at Columbia University have developed a technology that uses AI to detect early signs of Alzheimer's disease in everyday conversation. By analyzing millisecond-scale changes in audio recordings of conversations with nurses, the system can predict dementia risk up to 10 years in advance, and can identify mild cognitive impairment with 90% accuracy from as little as 30 seconds of speech. The approach is non-invasive and can be used in routine care, but ethical concerns have also been raised, including psychological impact on patients, overtreatment, and privacy. Clinical trials are currently underway.

<Related Information>

Beyond electronic health record data: leveraging natural language processing and machine learning to uncover cognitive insights from patient-nurse verbal communications

Maryam Zolnoori, PhD, Ali Zolnour, MS, Sasha Vergez, MS, Sridevi Sridharan, MS, Ian Spens, MS, Maxim Topaz, PhD, James M Noble, MD, Suzanne Bakken, PhD, Julia Hirschberg, PhD, Kathryn Bowles, PhD …
Journal of the American Medical Informatics Association  Published: 12 December 2024
DOI:https://doi.org/10.1093/jamia/ocae300

Abstract

Background

Mild cognitive impairment and early-stage dementia significantly impact healthcare utilization and costs, yet more than half of affected patients remain underdiagnosed. This study leverages audio-recorded patient-nurse verbal communication in home healthcare settings to develop an artificial intelligence-based screening tool for early detection of cognitive decline.

Objective

To develop a speech processing algorithm using routine patient-nurse verbal communication and evaluate its performance when combined with electronic health record (EHR) data in detecting early signs of cognitive decline.

Method

We analyzed 125 audio-recorded patient-nurse verbal communications for 47 patients from a major home healthcare agency in New York City. Of the 47 patients, 19 experienced symptoms associated with the onset of cognitive decline. A natural language processing algorithm was developed to extract domain-specific linguistic and interaction features from these recordings. The algorithm’s performance was compared against EHR-based screening methods. Both standalone and combined data approaches were assessed using F1-score and area under the curve (AUC) metrics.
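The two evaluation metrics named above, F1-score and AUC, can be made concrete with a minimal sketch. This is illustrative only and not the authors' code; the prediction counts and scores below are hypothetical, not the study's data:

```python
# Minimal sketch of the study's two evaluation metrics:
# F1-score and area under the ROC curve (AUC).

def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def auc_roc(pos_scores, neg_scores):
    """Probability that a random positive case outranks a random
    negative case (ties count half) -- the rank view of AUC-ROC."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical screening results: 17 true positives,
# 3 false positives, 3 false negatives.
print(round(f1_score(tp=17, fp=3, fn=3), 2))  # 0.85
print(auc_roc([0.9, 0.8, 0.7], [0.2, 0.4, 0.75]))
```

An F1 of 85 and AUC of 86.47, as reported for the audio-only model below, correspond to these quantities expressed as percentages.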

Results

The initial model using only patient-nurse verbal communication achieved an F1-score of 85 and an AUC of 86.47. The model based on EHR data achieved an F1-score of 75.56 and an AUC of 79. Combining patient-nurse verbal communication with EHR data yielded the highest performance, with an F1-score of 88.89 and an AUC of 90.23. Key linguistic indicators of cognitive decline included reduced linguistic diversity, grammatical challenges, repetition, and altered speech patterns. Incorporating audio data significantly enhanced the risk prediction models for hospitalization and emergency department visits.
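The linguistic indicators reported above (reduced linguistic diversity, repetition) can be approximated by simple surface statistics. The toy functions below are illustrative stand-ins under that assumption, not the study's actual feature set:

```python
# Toy versions of two linguistic indicators of cognitive decline:
# lexical diversity and word repetition.

def type_token_ratio(tokens):
    """Distinct words / total words: lower = less diverse speech."""
    return len(set(tokens)) / len(tokens)

def immediate_repetition_rate(tokens):
    """Fraction of word transitions that repeat the previous word."""
    repeats = sum(1 for a, b in zip(tokens, tokens[1:]) if a == b)
    return repeats / max(len(tokens) - 1, 1)

fluent = "the boy is reaching for the cookie jar on the shelf".split()
halting = "the the boy is is taking the the cookie cookie".split()

print(type_token_ratio(fluent) > type_token_ratio(halting))  # True
print(immediate_repetition_rate(halting)
      > immediate_repetition_rate(fluent))                   # True
```

In practice such features would be computed over transcripts of the patient-nurse conversations and fed to the classifier alongside acoustic measures.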

Discussion

Routine verbal communication between patients and nurses contains critical linguistic and interactional indicators for identifying cognitive impairment. Integrating audio-recorded patient-nurse communication with EHR data provides a more comprehensive and accurate method for early detection of cognitive decline, potentially improving patient outcomes through timely interventions. This combined approach could revolutionize cognitive impairment screening in home healthcare settings.

 

ADscreen: A speech processing-based screening system for automatic identification of patients with Alzheimer’s disease and related dementia

Maryam Zolnoori, Ali Zolnour, Maxim Topaz
Artificial Intelligence in Medicine  Available online: 17 July 2023
DOI:https://doi.org/10.1016/j.artmed.2023.102624

Graphical abstract (figure not reproduced)

Abstract

Alzheimer’s disease and related dementias (ADRD) present a looming public health crisis, affecting roughly 5 million people, or 11% of older adults, in the United States. Despite nationwide efforts for timely diagnosis of patients with ADRD, more than 50% of them remain undiagnosed and unaware of their disease. To address this challenge, we developed ADscreen, an innovative speech-processing based ADRD screening algorithm for the proactive identification of patients with ADRD. ADscreen consists of five major components: (i) noise reduction for reducing background noises from the audio-recorded patient speech, (ii) modeling the patient’s ability in phonetic motor planning using acoustic parameters of the patient’s voice, (iii) modeling the patient’s ability in semantic and syntactic levels of language organization using linguistic parameters of the patient speech, (iv) extracting vocal and semantic psycholinguistic cues from the patient speech, and (v) building and evaluating the screening algorithm. To identify important speech parameters (features) associated with ADRD, we used Joint Mutual Information Maximization (JMIM), an effective feature selection method for high-dimensional, small-sample-size datasets. Modeling the relationship between speech parameters and the outcome variable (presence/absence of ADRD) was conducted using three different machine learning (ML) architectures capable of joining informative acoustic and linguistic parameters with contextual word embedding vectors obtained from DistilBERT (a distilled version of Bidirectional Encoder Representations from Transformers).
We evaluated the performance of ADscreen on audio-recorded patient speech (verbal descriptions) from the Cookie-Theft picture description task, which is publicly available in DementiaBank. The joint fusion of acoustic and linguistic parameters with contextual word embedding vectors of DistilBERT achieved an F1-score of 84.64 (standard deviation [std] = ±3.58) and an AUC-ROC of 92.53 (std = ±3.34) on the training dataset, and an F1-score of 89.55 and an AUC-ROC of 93.89 on the test dataset. In summary, ADscreen has strong potential to be integrated into clinical workflows to address the need for an ADRD screening tool, so that patients with cognitive impairment can receive appropriate and timely care.
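The JMIM step above ranks candidate speech features by their joint mutual information with the ADRD label. As a simplified illustration of the underlying idea, the sketch below ranks discretized features by plain mutual information with the label; full JMIM additionally scores features jointly with already-selected ones, which is omitted here, and the feature names and values are hypothetical:

```python
import math
from collections import Counter

# Simplified illustration of mutual-information-based feature ranking
# (plain MI, in nats -- a reduced form of the JMIM criterion).

def mutual_information(feature, label):
    """I(X;Y) = sum_xy p(x,y) * log(p(x,y) / (p(x) p(y)))."""
    n = len(feature)
    joint = Counter(zip(feature, label))
    px = Counter(feature)
    py = Counter(label)
    return sum(
        (c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

# Hypothetical discretized features for 8 speakers (label 1 = ADRD).
label          = [1, 1, 1, 1, 0, 0, 0, 0]
low_diversity  = [1, 1, 1, 1, 0, 0, 0, 0]  # tracks the label perfectly
long_pauses    = [1, 1, 0, 1, 0, 1, 0, 0]  # partially informative
speaker_gender = [1, 0, 1, 0, 1, 0, 1, 0]  # independent of the label

scores = {name: mutual_information(f, label)
          for name, f in [("low_diversity", low_diversity),
                          ("long_pauses", long_pauses),
                          ("speaker_gender", speaker_gender)]}
print(max(scores, key=scores.get))  # low_diversity
```

Top-ranked features would then be joined with the DistilBERT embedding vectors as inputs to the ML classifiers described in the abstract.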

Medicine & Health