2025-10-27 Max Planck Institute
Web summary:

The scientists recorded participants’ brain activity via EEG and analyzed how strongly the brain synchronized with both the acoustic patterns and the linguistic patterns of each voice.
© MPI CBS
<Related information>
- https://www.mpg.de/25612590/brain-activity-at-a-cocktail-party
- https://www.jneurosci.org/content/early/2025/10/23/JNEUROSCI.0657-25.2025
Attentional engagement with target and distractor streams predicts speech comprehension in multitalker environments
Alice Vivien Barchet, Andrea Bruera, Jasmin Wend, Johanna M. Rimmele, Jonas Obleser and Gesa Hartwigsen
Journal of Neuroscience, published 24 October 2025
DOI: https://doi.org/10.1523/JNEUROSCI.0657-25.2025
Abstract
Understanding speech while ignoring competing speech streams in the surrounding environment is challenging. Previous studies have demonstrated that attention shapes the neural representation of speech features. Attended streams are typically represented more strongly than unattended ones, suggesting either enhancement of the attended or suppression of the unattended stream. However, it is unclear how these complementary processes support attentional filtering and speech comprehension on different hierarchical levels. In this study, we used multivariate temporal response functions to analyze the EEG signals of 43 young adults (24 women), examining the relationship between the neural tracking of acoustic and higher-level linguistic features and a fine-grained speech comprehension measure. We show that the neural tracking of word and phoneme onsets and word-level linguistic features in the attended stream predicted comprehension at the individual single-trial level. Moreover, acoustic tracking of the ignored speech stream was positively correlated with comprehension performance, whereas word level linguistic neural tracking of the ignored stream was negatively correlated with comprehension. Collectively, our results suggest that attentional filtering during speech comprehension requires target enhancement as well as distractor suppression at different hierarchical levels.
Significance statement In social settings, speech comprehension is often challenged by the presence of multiple speakers talking simultaneously. The ability to focus on a relevant stream while ignoring irrelevant speech information in the background is crucial for successful and efficient interpersonal interactions. However, the precise neural mechanisms underlying this selective filtering process remain unclear. We establish the interplay of acoustic and higher-level information as objective markers of attentional selection and comprehension success.


