Analyzing brain activity during a party (Brain activity at a party)


2025-10-27 Max Planck Institute

Dr. Vivien Barchet and colleagues at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) used EEG (electroencephalography) to analyze the neural mechanism behind the "cocktail party problem": focusing on a particular voice while multiple speakers talk at once. They presented 43 participants with two spoken sentences simultaneously and asked them to attend to one. The more strongly a listener's brain synchronized with the speech patterns of the target speaker, the better their comprehension. Lightly tracking only the acoustic features of the ignored speech was actually beneficial, helping to separate the auditory streams; processing the ignored speech at the level of words and meaning, however, diverted attention and reduced comprehension. In short, listening in noisy environments depends on a dynamic neural balance that enhances the relevant speech while suppressing distracting speech before it reaches semantic processing. The findings may inform everyday conversation and assistive hearing technology.

The scientists recorded participants’ brain activity via EEG and analyzed how strongly the brain synchronized with both the acoustic patterns and the linguistic patterns of each voice.
© MPI CBS

<Related information>

Attentional engagement with target and distractor streams predicts speech comprehension in multitalker environments

Alice Vivien Barchet, Andrea Bruera, Jasmin Wend, Johanna M. Rimmele, Jonas Obleser and Gesa Hartwigsen
Journal of Neuroscience  Published: 24 October 2025
DOI:https://doi.org/10.1523/JNEUROSCI.0657-25.2025

Abstract

Understanding speech while ignoring competing speech streams in the surrounding environment is challenging. Previous studies have demonstrated that attention shapes the neural representation of speech features. Attended streams are typically represented more strongly than unattended ones, suggesting either enhancement of the attended or suppression of the unattended stream. However, it is unclear how these complementary processes support attentional filtering and speech comprehension on different hierarchical levels. In this study, we used multivariate temporal response functions to analyze the EEG signals of 43 young adults (24 women), examining the relationship between the neural tracking of acoustic and higher-level linguistic features and a fine-grained speech comprehension measure. We show that the neural tracking of word and phoneme onsets and word-level linguistic features in the attended stream predicted comprehension at the individual single-trial level. Moreover, acoustic tracking of the ignored speech stream was positively correlated with comprehension performance, whereas word level linguistic neural tracking of the ignored stream was negatively correlated with comprehension. Collectively, our results suggest that attentional filtering during speech comprehension requires target enhancement as well as distractor suppression at different hierarchical levels.
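The multivariate temporal response function (mTRF) analysis described above can be illustrated with a minimal forward-model sketch: ridge regression of EEG on time-lagged stimulus features, with "neural tracking" quantified as the correlation between predicted and recorded EEG. This is a simplified single-feature, single-channel toy on synthetic data, not the authors' pipeline; the function names and parameters are illustrative assumptions, and the real study used multivariate models over acoustic and word-level linguistic features.

```python
# Minimal sketch of a forward TRF model: predict EEG from a time-lagged
# stimulus feature (e.g., the speech envelope) via ridge regression.
# All data are synthetic; names (lagged_design, fit_trf) are illustrative.
import numpy as np

def lagged_design(stimulus, n_lags):
    """Design matrix whose columns are the stimulus shifted by lags 0..n_lags-1."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus, eeg, n_lags=32, ridge=1.0):
    """TRF weights w minimizing ||X w - eeg||^2 + ridge * ||w||^2."""
    X = lagged_design(stimulus, n_lags)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)
    return w, X

# Synthetic check: the "EEG" is a known lagged response to the stimulus plus noise.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
true_trf = np.exp(-np.arange(32) / 8.0)  # decaying impulse response
eeg = lagged_design(stim, 32) @ true_trf + 0.1 * rng.standard_normal(2000)

w, X = fit_trf(stim, eeg)
# "Neural tracking" strength: correlation between predicted and actual EEG.
tracking = np.corrcoef(X @ w, eeg)[0, 1]
```

In the study's logic, such tracking scores, computed separately for acoustic and linguistic features of the attended and ignored streams, are then related to single-trial comprehension.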

Significance statement In social settings, speech comprehension is often challenged by the presence of multiple speakers talking simultaneously. The ability to focus on a relevant stream while ignoring irrelevant speech information in the background is crucial for successful and efficient interpersonal interactions. However, the precise neural mechanisms underlying this selective filtering process remain unclear. We establish the interplay of acoustic and higher-level information as objective markers of attentional selection and comprehension success.

Medicine & Health