Keywords
Attractor
Computer science
Robustness (evolution)
Computation
Integrator
Modular design
Set (abstract data type)
Simplicity (philosophy)
Theoretical computer science
Mathematics
Algorithm
Mathematical analysis
Computer network
Biochemistry
Philosophy
Bandwidth (computing)
Epistemology
Gene
Chemistry
Programming language
Operating system
Authors
Mikail Khona, Ila Fiete
Identifier
DOI:10.1038/s41583-022-00642-0
Abstract
In this Review, we describe the singular success of attractor neural network models in describing how the brain maintains persistent activity states for working memory, corrects errors and integrates noisy cues. We consider the mechanisms by which simple and forgetful units can organize to collectively generate dynamics on the long timescales required for such computations. We discuss the myriad potential uses of attractor dynamics for computation in the brain, and showcase notable examples of brain systems in which inherently low-dimensional continuous-attractor dynamics have been concretely and rigorously identified. Thus, it is now possible to conclusively state that the brain constructs and uses such systems for computation. Finally, we highlight recent theoretical advances in understanding how the fundamental trade-offs between robustness and capacity and between structure and flexibility can be overcome by reusing and recombining the same set of modular attractors for multiple functions, so they together produce representations that are structurally constrained and robust but exhibit high capacity and are flexible.

Attractor network dynamics can support several computations performed by the brain. In their Review, Khona and Fiete introduce different attractor dynamics and their computational utility, describe evidence of attractor networks across the brain and explain how such networks could be recombined to increase their flexibility and versatility.
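The error correction described in the abstract can be illustrated with the classic discrete-attractor model, a Hopfield network: stored patterns become fixed points of the dynamics, and a corrupted cue is pulled back to the nearest stored state. This is a minimal sketch, not a model from the Review itself; the network size, pattern count, Hebbian weight rule and synchronous update scheme below are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 3  # illustrative: 100 binary neurons, 3 stored patterns

# Random +/-1 patterns and Hebbian (outer-product) weights, zero diagonal.
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Iterate the sign dynamics until (in practice) a fixed point."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Corrupt 10 of 100 bits of the first pattern; the attractor dynamics
# drive the state back to the stored memory, correcting the errors.
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1
out = recall(probe)
print(np.array_equal(out, patterns[0]))
```

At this low memory load (3 patterns in 100 units) the crosstalk between patterns is small, so a 10% corruption is well inside the basin of attraction and recall succeeds; the same mechanism fails as the number of stored patterns approaches the network's capacity, which is the robustness-versus-capacity trade-off the abstract refers to.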