Deep neural networks show promise as models of human hearing


2023-12-13 Massachusetts Institute of Technology (MIT)

◆ According to a new study by MIT researchers, computational models that mimic the structure and function of the human auditory system could help in designing better hearing aids, cochlear implants, and brain-machine interfaces.
◆ The study showed that modern computational models derived from machine learning produce internal representations similar to those in the brain when people hear the same sounds, strengthening the case that these models can be used to describe how the brain works.

<Related information>

Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions

Greta Tuckute, Jenelle Feather, Dana Boebinger, Josh H. McDermott
PLOS Biology  Published: December 13, 2023
DOI:https://doi.org/10.1371/journal.pbio.3002366


Abstract

Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
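The abstract's core methodology is to regress measured fMRI voxel responses onto the activations of each model stage and score how well held-out stimuli are predicted. The sketch below, a minimal illustration and not the paper's actual analysis pipeline, shows one common way such model-brain predictions are computed: a closed-form ridge regression from stage activations to voxels, evaluated by per-voxel correlation on held-out stimuli (array shapes and the function name are illustrative assumptions).

```python
import numpy as np

def predict_voxels(activations, voxel_responses, alpha=1.0, train_frac=0.8):
    """Ridge regression from model-stage activations to fMRI voxel responses.

    activations:     (n_stimuli, n_units) responses of one model stage.
    voxel_responses: (n_stimuli, n_voxels) measured fMRI responses.
    Returns the per-voxel Pearson r between predicted and measured
    responses on the held-out stimuli.
    """
    n = activations.shape[0]
    n_train = int(n * train_frac)
    X_tr, X_te = activations[:n_train], activations[n_train:]
    Y_tr, Y_te = voxel_responses[:n_train], voxel_responses[n_train:]

    # Closed-form ridge solution: W = (X^T X + alpha I)^{-1} X^T Y
    d = X_tr.shape[1]
    W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ Y_tr)
    Y_hat = X_te @ W

    # Per-voxel correlation of predicted vs. measured held-out responses
    Yc = Y_te - Y_te.mean(axis=0)
    Pc = Y_hat - Y_hat.mean(axis=0)
    denom = np.linalg.norm(Yc, axis=0) * np.linalg.norm(Pc, axis=0) + 1e-12
    return (Yc * Pc).sum(axis=0) / denom
```

Running this for every stage of a network and plotting which stage best predicts each brain region is what yields the stage-to-region correspondence the paper reports (middle stages for primary auditory cortex, deep stages for non-primary cortex).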
