Computer science
Construct (Python library)
Bayes' theorem
Computational model
Set (abstract data type)
Cognition
Context (archaeology)
Artificial intelligence
Machine learning
Cognitive science
Field (mathematics)
Function (biology)
Meta-learning (computer science)
Bayesian probability
Psychology
Task (project management)
Mathematics
Neuroscience
Economics
Paleontology
Management
Programming language
Pure mathematics
Biology
Evolutionary biology
Authors
Marcel Binz, Ishita Dasgupta, Akshay K. Jagadish, Matthew Botvinick, Jane X. Wang, Eric Schulz
Identifier
DOI: 10.1017/s0140525x23003266
Abstract
Psychologists and neuroscientists rely extensively on computational models for studying and analyzing the human mind. Traditionally, such computational models have been hand-designed by expert researchers. Two prominent examples are cognitive architectures and Bayesian models of cognition. While the former requires the specification of a fixed set of computational structures and a definition of how these structures interact with each other, the latter necessitates a commitment to a particular prior and a likelihood function which, in combination with Bayes' rule, determine the model's behavior. In recent years, a new framework has established itself as a promising tool for building models of human cognition: the framework of meta-learning. In contrast to the previously mentioned model classes, meta-learned models acquire their inductive biases from experience, i.e., by repeatedly interacting with an environment. However, a coherent research program around meta-learned models of cognition is still missing to this day. The purpose of this article is to synthesize previous work in this field and establish such a research program. We accomplish this by pointing out that meta-learning can be used to construct Bayes-optimal learning algorithms, allowing us to draw strong connections to the rational analysis of cognition. We then discuss several advantages of the meta-learning framework over traditional methods and reexamine prior work in the context of these new insights.
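The abstract's central claim, that meta-learning across a distribution of tasks yields Bayes-optimal learning algorithms, can be illustrated with a minimal toy sketch (not the authors' code; the task setup and names here are illustrative assumptions). A tabular "model" is trained by simple averaging over many coin-flip tasks whose biases are drawn from a uniform prior; its learned predictions converge toward the Bayesian posterior mean (heads + 1) / (flips + 2) that an ideal observer with that prior would compute.

```python
import random

# Toy meta-learning demo: learn a prediction rule across many tasks.
# Each task: a coin bias theta ~ Uniform(0, 1) (the prior over tasks);
# the learner observes a sequence of flips from that coin.
# The "model" is a table mapping (heads_so_far, flips_so_far) to a
# predicted probability that the next flip is heads, fit by averaging
# observed outcomes across tasks (empirical risk minimization).

random.seed(0)
N_TASKS, N_FLIPS = 200_000, 5
counts = {}  # (heads_so_far, flips_so_far) -> (sum of next outcomes, visits)

for _ in range(N_TASKS):
    theta = random.random()            # sample a new task from the prior
    heads = 0
    for t in range(N_FLIPS):
        flip = 1 if random.random() < theta else 0
        s, v = counts.get((heads, t), (0.0, 0))
        counts[(heads, t)] = (s + flip, v + 1)  # update from experience
        heads += flip

def predict(heads, flips):
    """Meta-learned estimate of P(next flip = heads)."""
    s, v = counts[(heads, flips)]
    return s / v

# After observing 3 heads in 3 flips, the Bayes-optimal prediction under
# the uniform prior is (3 + 1) / (3 + 2) = 0.8; the learned table is close.
print(predict(3, 3))
```

The point of the sketch is that nothing Bayesian is built in: the prior enters only implicitly, through the distribution of tasks the learner experiences, which is the sense in which the article says meta-learned models "acquire their inductive biases from experience."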