Solving algorithm ‘amnesia’ reveals clues to how we learn


Finding by UCI biologists could help combat cognitive impairments

2022-07-05 University of California, Irvine (UCI)

A discovery about how algorithms can learn and retain information more efficiently may offer insight into the brain's capacity to absorb new knowledge. The finding, by researchers in the UCI School of Biological Sciences, could help combat cognitive impairments and improve technology.
Artificial neural networks (ANNs) are algorithms designed to mimic the behavior of neurons in the brain.
The researchers found that when an ANN interleaves only a much smaller subset of old information, centered on items similar to the new knowledge, it can learn the new material without forgetting what it already knows.
This allows ANNs to take in new information very efficiently, without having to revisit everything they previously learned.
The finding could also make the algorithms behind many machines, such as medical diagnostic equipment and autonomous vehicles, more accurate and efficient.
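The interleaving scheme described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes toy class-mean embedding vectors (in the study, similarity is computed from the network's internal representations) and uses a softmax over cosine similarity so that old classes resembling the new class are replayed more often.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy class-mean embeddings (hypothetical values, for illustration only):
# three previously learned classes and one new class.
old_means = np.array([
    [1.0, 0.0],   # old class 0: very similar to the new class
    [0.0, 1.0],   # old class 1: dissimilar
    [0.7, 0.7],   # old class 2: intermediate
])
new_mean = np.array([0.9, 0.1])

def similarity_weights(old_means, new_mean, temperature=0.2):
    """Softmax over cosine similarity to the new class: old classes
    that resemble the new item get replayed more often (the core
    idea of similarity-weighted interleaved learning)."""
    sims = old_means @ new_mean / (
        np.linalg.norm(old_means, axis=1) * np.linalg.norm(new_mean)
    )
    z = np.exp((sims - sims.max()) / temperature)
    return z / z.sum()

w = similarity_weights(old_means, new_mean)

# Draw a small replay batch of old-class labels, weighted by similarity,
# to interleave with the new items during each training epoch.
replay_classes = rng.choice(len(old_means), size=20, p=w)
```

Because the replay batch is small and concentrated on representationally similar classes, far fewer old items are presented per epoch than full interleaving would require, which is the source of the data efficiency the authors report.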

<Related information>

Learning in deep neural networks and brains with similarity-weighted interleaved learning

Rajat Saxena, Justin L. Shobe, and Bruce L. McNaughton
Proceedings of the National Academy of Sciences, Published: June 27, 2022
DOI: https://doi.org/10.1073/pnas.2115229119

Significance

Unlike humans, artificial neural networks rapidly forget previously learned information when learning something new and must be retrained by interleaving the new and old items; however, interleaving all old items is time-consuming and might be unnecessary. It might be sufficient to interleave only old items having substantial similarity to new ones. We show that training with similarity-weighted interleaving of old items with new ones allows deep networks to learn new items rapidly without forgetting, while using substantially less data. We hypothesize how similarity-weighted interleaving might be implemented in the brain using persistent excitability traces on recently active neurons and attractor dynamics. These findings may advance both neuroscience and machine learning.

Abstract

Understanding how the brain learns throughout a lifetime remains a long-standing challenge. In artificial neural networks (ANNs), incorporating novel information too rapidly results in catastrophic interference, i.e., abrupt loss of previously acquired knowledge. Complementary Learning Systems Theory (CLST) suggests that new memories can be gradually integrated into the neocortex by interleaving new memories with existing knowledge. This approach, however, has been assumed to require interleaving all existing knowledge every time something new is learned, which is implausible because it is time-consuming and requires a large amount of data. We show that deep, nonlinear ANNs can learn new information by interleaving only a subset of old items that share substantial representational similarity with the new information. By using such similarity-weighted interleaved learning (SWIL), ANNs can learn new information rapidly with a similar accuracy level and minimal interference, while using a much smaller number of old items presented per epoch (fast and data-efficient). SWIL is shown to work with various standard classification datasets (Fashion-MNIST, CIFAR10, and CIFAR100), deep neural network architectures, and in sequential learning frameworks. We show that data efficiency and speedup in learning new items are increased roughly proportionally to the number of nonoverlapping classes stored in the network, which implies an enormous possible speedup in human brains, which encode a high number of separate categories. Finally, we propose a theoretical model of how SWIL might be implemented in the brain.

General bioengineering