Interpretability
Computer science
Deep learning
Artificial intelligence
Cognitive science
Theory of mind
Field (mathematics)
Focus (optics)
Data science
Psychology
Cognition
Neuroscience
Physics
Mathematics
Pure mathematics
Optics
Authors
Jaan Aru, Aqeel Labash, Oriol Corcoll, Raúl Vicente
Identifier
DOI:10.1007/s10462-023-10401-x
Abstract
Theory of Mind (ToM) is an essential human ability to infer the mental states of others. Here we provide a coherent summary of the potential, current progress, and problems of deep learning (DL) approaches to ToM. We highlight that many current findings can be explained through shortcuts, which arise because the tasks used to investigate ToM in deep learning systems have been too narrow. Thus, we encourage researchers to investigate ToM in complex, open-ended environments. Furthermore, to inspire future DL systems, we provide a concise overview of prior work done in humans. We further argue that when studying ToM with DL, the research's main focus and contribution ought to be opening up the network's representations. We recommend that researchers use tools from the field of AI interpretability to study the relationship between different network components and aspects of ToM.