The free-energy principle: a rough guide to the brain?

Topics
Salience (neuroscience) · Helmholtz free energy · Free-energy principle · Entropy (arrow of time) · Function (biology) · Cognitive science · Psychology · Perception · Free will · Computer science · Artificial intelligence · Statistical physics · Epistemology · Physics · Neuroscience · Machine learning · Quantum mechanics · Evolutionary biology · Biology · Philosophy
Author
Karl J. Friston
Source
Journal: Trends in Cognitive Sciences [Elsevier]
Volume/Issue: 13 (7): 293-301
Cited by: 1364
Identifier
DOI: 10.1016/j.tics.2009.04.005
Abstract

This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.

Glossary
Bayesian surprise: a measure of salience based on the divergence between the recognition and prior densities. It measures the information in the data that can be recognised.
Conditional density: or posterior density; the probability distribution of causes or model parameters, given some data, i.e., a probabilistic mapping from observed data to causes.
Empirical priors: priors that are induced by hierarchical models; they provide constraints on the recognition density in the usual way but depend on the data.
Entropy: the average surprise of outcomes sampled from a probability distribution or density. A density with low entropy means that, on average, the outcome is relatively predictable.
Ergodic: a process is ergodic if its long-term time average converges to its ensemble average. Ergodic processes that evolve for a long time forget their initial states.
Free energy: an information-theory measure that bounds (is greater than) the surprise on sampling some data, given a generative model.
Generalised coordinates of motion: cover the value of a variable, its motion, acceleration, jerk and higher orders of motion. A point in generalised coordinates corresponds to a path or trajectory over time.
Generative model: or forward model; a probabilistic mapping from causes to observed consequences (data). It is usually specified in terms of the likelihood of getting some data given their causes (the parameters of a model) and priors on the parameters.
Gradient descent: an optimisation scheme that finds a minimum of a function by changing its arguments in proportion to the negative of the gradient of the function at the current value.
Helmholtz machine: a device or scheme that uses a generative model to furnish a recognition density. Helmholtz machines learn hidden structure in data by optimising the parameters of generative models.
Kullback-Leibler divergence: or information divergence, information gain, cross or relative entropy; a non-commutative measure of the difference between two probability distributions.
Prior: the probability distribution or density on the causes of data that encodes beliefs about those causes prior to observing the data.
Recognition density: or approximating conditional density; an approximate probability distribution of the causes of data. It is the product of inference, or inverting a generative model.
Stochastic: the successive states of stochastic processes are governed by random effects.
Sufficient statistics: quantities which are sufficient to parameterise a probability density (e.g., the mean and covariance of a Gaussian density).
Surprise: or self-information; the negative log-probability of an outcome. An improbable outcome is therefore surprising.
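The glossary's entries for free energy, surprise, Kullback-Leibler divergence, recognition density and conditional density fit together in one standard variational identity. The sketch below states it in generic notation (y for data, ϑ for their causes, q for the recognition density); the notation is assumed here, not quoted from the article.

```latex
\begin{aligned}
F &= \mathbb{E}_{q(\vartheta)}\!\left[\ln q(\vartheta) - \ln p(y,\vartheta)\right] \\
  &= \underbrace{-\ln p(y)}_{\text{surprise}}
   \;+\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(\vartheta)\,\|\,p(\vartheta \mid y)\,\right]}_{\;\geq\, 0}
   \;\;\geq\;\; -\ln p(y).
\end{aligned}
```

Because the divergence term is non-negative and vanishes only when the recognition density equals the conditional (posterior) density, minimising F both bounds the surprise and performs approximate Bayesian inference.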
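To make the bound concrete, here is a minimal numerical sketch: a toy three-state generative model in which free energy is computed exactly and then minimised by gradient descent on the sufficient statistics of the recognition density. Everything in it (the prior, the likelihood, the softmax parameterisation) is an illustrative assumption, not the neuronal scheme the article describes.

```python
import numpy as np

# Toy discrete set-up: a hidden cause theta with three states and a single
# observed datum y. The numbers are arbitrary, purely for illustration.
prior = np.array([0.5, 0.3, 0.2])       # p(theta): beliefs before seeing y
likelihood = np.array([0.8, 0.1, 0.3])  # p(y | theta) for the observed y
joint = likelihood * prior              # p(y, theta): the generative model
evidence = joint.sum()                  # p(y), by marginalising over causes
surprise = -np.log(evidence)            # self-information of the datum
posterior = joint / evidence            # conditional density p(theta | y)

def kl(q, p):
    """Kullback-Leibler divergence KL[q || p]: non-negative, non-commutative."""
    return float(np.sum(q * (np.log(q) - np.log(p))))

def free_energy(q):
    """F(q) = E_q[ln q(theta) - ln p(y, theta)], an upper bound on surprise."""
    return float(np.sum(q * (np.log(q) - np.log(joint))))

q = np.full(3, 1.0 / 3.0)               # recognition density, initialised flat
assert np.isclose(free_energy(q), surprise + kl(q, posterior))  # the identity
assert free_energy(q) >= surprise                               # the bound

# Gradient descent on F with respect to the sufficient statistics of q
# (log-weights s, mapped through a softmax so q remains a proper density).
s = np.log(q)
for _ in range(2000):
    q = np.exp(s - s.max())
    q /= q.sum()
    grad = q * (np.log(q) - np.log(joint) - free_energy(q))  # dF/ds
    s -= 0.5 * grad                     # step along the negative gradient

print(np.round(q, 4), np.round(posterior, 4))  # q has converged to the posterior
print(free_energy(q), surprise)                # and F has fallen to the surprise
```

The final prints show the two senses in which minimising free energy is inference here: q recovers the posterior, and F touches its lower bound, the surprise.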
