Reinforcement learning
Motion (physics)
Collective motion
Agent-based model
Computer science
Artificial intelligence
Artificial neural network
Process (computing)
Collective behavior
Reinforcement
Prior and posterior
Dynamics
Psychology
Social psychology
Sociology
Epistemology
Operating system
Philosophy
Pedagogy
Anthropology
Authors
Xin Wang,Shuo Liu,Yifan Yu,Shengzhi Yue,Ying Liu,Fumin Zhang,Yuanshan Lin
Identifier
DOI:10.1016/j.ecolmodel.2022.110259
Abstract
Complex collective motion patterns can emerge from very simple local interactions among individual agents. However, it is still unclear how and why interactions among individuals lead to the emergence of collective motion. Modeling is an effective way to understand the mechanisms that govern collective animal motion. In this work, to avoid imposing fixed sets of rules on collective motion models a priori, as classical approaches do, we propose a new method of modeling collective motion for fish schooling via multi-agent reinforcement learning. We model each fish as an artificial learning agent whose policy is acquired using mean field Q-learning (MFQ). The observation of each fish agent is represented as a multi-channel image, where each channel describes a different feature, such as an agent's position or orientation. The policy of an agent is approximated with a neural network trained with the MFQ algorithm, during which agents are rewarded (or penalized) according to the number of neighbors and consecutive collisions between individuals. We study the dynamics of collective motion that emerge from the learned policy. The experimental results show that the learned policy can produce collective motion in groups of various sizes. In addition, three different collective motion patterns observed in nature emerged during the training process. The learned policy can help us gain new insight into how and why individual interactions lead to collective motion. This study also demonstrates that multi-agent reinforcement learning has great potential as a new approach for the analysis and modeling of collective motion.
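The abstract's core ingredients — a mean field Q-learning update and a reward shaped by neighbor count and collisions — can be sketched as follows. This is a minimal tabular illustration, not the paper's method: the paper uses a neural-network approximator over multi-channel image observations, and the state/action sizes, shaping constants, and function names here are all assumptions for illustration.

```python
import numpy as np

# Hypothetical tabular sketch of a mean field Q-learning (MFQ) step.
# In MFQ, each agent's Q-function conditions on its own action and on a
# summary (here, a discretized mean) of its neighbors' actions.
N_STATES, N_ACTIONS = 4, 3
ALPHA, GAMMA, TEMP = 0.1, 0.95, 1.0  # assumed hyperparameters

# Q indexed by (state, own action, discretized mean neighbor action).
Q = np.zeros((N_STATES, N_ACTIONS, N_ACTIONS))

def boltzmann_policy(state, mean_action):
    """Softmax over the agent's own actions, given the neighbors' mean action."""
    prefs = Q[state, :, mean_action] / TEMP
    prefs -= prefs.max()  # numerical stability
    p = np.exp(prefs)
    return p / p.sum()

def mfq_update(s, a, mean_a, r, s_next, mean_a_next):
    """One MFQ step: bootstrap with the mean-field value of the next state."""
    pi = boltzmann_policy(s_next, mean_a_next)
    v_next = float(np.dot(pi, Q[s_next, :, mean_a_next]))
    Q[s, a, mean_a] += ALPHA * (r + GAMMA * v_next - Q[s, a, mean_a])

def reward(n_neighbors, n_collisions):
    """Reward in the spirit of the abstract: more neighbors is good,
    collisions are penalized (the weights are assumptions)."""
    return 0.1 * n_neighbors - 1.0 * n_collisions

# A single illustrative transition: 3 neighbors, no collisions.
mfq_update(s=0, a=1, mean_a=2, r=reward(3, 0), s_next=1, mean_a_next=2)
```

The key difference from independent Q-learning is the bootstrap target: rather than maximizing over the next action alone, the value of the next state averages the agent's own Q-values under its Boltzmann policy, conditioned on the neighbors' mean action, which keeps the joint-action space tractable as the school grows.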