Brain-Controlled Hearing System Proves Itself in First Human Studies


2026-05-11 Columbia University

A research team at Columbia University's Zuckerman Institute has demonstrated in humans, for the first time, a "brain-controlled hearing system" that uses brain activity to select the sound a listener wants to hear. In noisy settings, people rely on the "cocktail party effect" to focus on a particular conversation, but hearing aids have struggled to reproduce this kind of selective listening. In this study, the team built a system that analyzes brain signals in real time, uses AI to identify which speaker the user is attending to, and amplifies that speaker's voice. Initial human trials confirmed improved intelligibility of the target speech even in noisy environments. The researchers expect the technology to lead to next-generation hearing aids and brain-computer interfaces (BCIs), and they note potential applications in communication support for patients with neurological disorders and in more natural human-machine interfaces.


A hearing system monitoring this man’s brain activity amplifies a conversation played on his left while quieting one on his right, based on which conversation his brainwaves suggest he is paying attention to. (Vishal Choudhari / Mesgarani lab / Columbia’s Zuckerman Institute)

<Related Information>

Real-time brain-controlled selective hearing enhances speech perception in multi-talker environments

Vishal Choudhari, Maximilian Nentwich, Sarah Johnson, Jose L. Herrero, Stephan Bickel, Ashesh D. Mehta, Daniel Friedman, Adeen Flinker, Edward F. Chang & Nima Mesgarani
Nature Neuroscience Published: 11 May 2026
DOI: https://doi.org/10.1038/s41593-026-02281-5

Abstract

Understanding speech in noisy environments is difficult for many people, and current hearing aids often fail because they amplify all sounds rather than the talker of interest. Auditory attention decoding (AAD) offers a potential solution by using the listener’s brain signals to identify and enhance the attended speaker, but it has been unclear whether this can provide real-time perceptual benefits. Here we used high-resolution intracranial electroencephalography in patients undergoing neurosurgical procedures to implement a closed-loop system that achieves the decoding fidelity necessary to dynamically amplify the attended talker. Across multiple experiments, the system improved speech intelligibility, reduced listening effort and was consistently preferred by subjects. It also tracked both instructed and self-initiated attention shifts. By providing direct evidence that a real-time, brain-controlled hearing system can enhance perception, this work establishes a key performance benchmark for future auditory brain–computer interfaces and advances AAD from a theoretical concept to a validated solution for personalized assistive hearing.
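The core idea of auditory attention decoding described above can be sketched in a few lines. The following is a minimal toy illustration, not the paper's implementation: it assumes a pre-trained linear decoder that reconstructs a speech envelope from a window of neural data, compares the reconstruction to each talker's known envelope by correlation, and remixes the audio to amplify the better-matching talker. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def decode_and_amplify(neural, decoder, env_a, env_b, audio_a, audio_b, gain_db=9.0):
    """One step of a toy auditory-attention-decoding (AAD) loop.

    neural   : (time, channels) neural recording for the current window
    decoder  : (channels,) assumed pre-trained linear weights mapping
               neural data to a speech envelope
    env_a/b  : speech envelopes of the two talkers in the same window
    audio_a/b: the two talkers' audio for the window
    Returns the remixed audio with the decoded attended talker amplified.
    """
    recon = neural @ decoder                    # reconstructed envelope
    corr_a = np.corrcoef(recon, env_a)[0, 1]    # similarity to talker A
    corr_b = np.corrcoef(recon, env_b)[0, 1]    # similarity to talker B
    gain = 10 ** (gain_db / 20)                 # dB -> linear amplitude
    if corr_a >= corr_b:                        # attend A: boost A, cut B
        return gain * audio_a + audio_b / gain
    return audio_a / gain + gain * audio_b      # attend B: boost B, cut A
```

A real closed-loop system would run this on short sliding windows with smoothing of the decoding decision; the paper's decoding fidelity and gain strategy are not specified in the abstract, so the numbers here are placeholders.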

 

Selective cortical representation of attended speaker in multi-talker speech perception

Nima Mesgarani & Edward F. Chang
Nature Published: 18 April 2012
DOI: https://doi.org/10.1038/nature11020

Abstract

Humans possess a remarkable ability to attend to a single speaker’s voice in a multi-talker background1,2,3. How the auditory system manages to extract intelligible speech under such acoustically complex and adverse listening conditions is not known, and, indeed, it is not clear how attended speech is internally represented4,5. Here, using multi-electrode surface recordings from the cortex of subjects engaged in a listening task with two simultaneous speakers, we demonstrate that population responses in non-primary human auditory cortex encode critical features of attended speech: speech spectrograms reconstructed based on cortical responses to the mixture of speakers reveal the salient spectral and temporal features of the attended speaker, as if subjects were listening to that speaker alone. A simple classifier trained solely on examples of single speakers can decode both attended words and speaker identity. We find that task performance is well predicted by a rapid increase in attention-modulated neural selectivity across both single-electrode and population-level cortical responses. These findings demonstrate that the cortical representation of speech does not merely reflect the external acoustic environment, but instead gives rise to the perceptual aspects relevant for the listener’s intended goal.
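The decoding step in this earlier study can be illustrated with a simple template-matching sketch. This is an assumed correlation-based stand-in (the paper's actual classifier details differ): given a spectrogram reconstructed from cortical responses to the speech mixture, pick the candidate single-speaker spectrogram that best correlates with it. The function name and data shapes are illustrative.

```python
import numpy as np

def classify_attended(recon_spec, candidate_specs):
    """Toy attended-speaker classifier via spectrogram template matching.

    recon_spec     : (freq, time) spectrogram reconstructed from cortical
                     responses to the two-speaker mixture
    candidate_specs: dict of speaker name -> clean single-speaker
                     spectrogram of the same shape
    Returns the name of the speaker whose clean spectrogram correlates
    best with the reconstruction.
    """
    def corr(a, b):
        # Pearson correlation over flattened spectrograms
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]

    return max(candidate_specs, key=lambda name: corr(recon_spec, candidate_specs[name]))
```

The abstract's key point is captured by this sketch: because the cortical reconstruction resembles the attended speaker heard alone, templates built only from single-speaker examples suffice to decode attention in the two-speaker mixture.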
