Reinforcement learning
Computer science
Intersection (aeronautics)
Generalizability theory
Signal (programming language)
Artificial intelligence
Multi-agent system
Function (biology)
Machine learning
Engineering
Aerospace engineering
Mathematics
Evolutionary biology
Biology
Statistics
Programming language
Authors
Liwen Zhu,Peixi Peng,Zongqing Lu,Yonghong Tian
Source
Journal: IEEE Transactions on Knowledge and Data Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-04
Volume/Issue: 35 (11): 11570-11584
Citations: 15
Identifier
DOI: 10.1109/tkde.2022.3232711
Abstract
Traffic signal control aims to coordinate traffic signals across intersections to improve the traffic efficiency of a district or a city. Deep reinforcement learning (RL) has recently been applied to traffic signal control, treating each traffic signal as an agent, and has demonstrated promising performance. However, several challenges may still limit its large-scale application in the real world. On the one hand, the policy of a traffic signal is often heavily influenced by its neighboring agents, so the coordination between an agent and its neighbors needs to be considered. Hence, the control of a road network composed of multiple traffic signals is naturally modeled as a multi-agent system, and all agents' policies need to be optimized simultaneously. On the other hand, once the policy function is conditioned not only on the current agent's observation but also on its neighbors', the policy becomes closely tied to the training scenario and generalizes poorly, because agents in different scenarios often have heterogeneous neighbors. To make a policy learned in one training scenario generalizable to new, unseen scenarios, a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method is proposed to learn a decentralized policy for each intersection that considers neighbor information in a latent way. Specifically, we formulate policy learning as a meta-learning problem over a set of related tasks, where each task corresponds to traffic signal control at an intersection whose neighbors are regarded as the unobserved part of the state. A learned latent variable is then introduced to represent task-specific information and is further incorporated into the policy for learning. In addition, to stabilize policy learning, a novel intrinsic reward is designed to encourage each agent's received rewards and observation transitions to be predictable conditioned only on its own history. Extensive experiments conducted on CityFlow demonstrate that the proposed method substantially outperforms existing approaches and shows superior generalizability.
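The intrinsic-reward idea in the abstract can be illustrated with a small sketch. The code below is a minimal, illustrative implementation (not the authors' released code) assuming the "predictability" objective is realized as a negative prediction error: an agent's own (observation, action, reward) history is encoded into a latent summary, a decoder predicts the next observation and reward from that summary, and the shaping term penalizes the prediction error. The module names (`HistoryEncoder`, `TransitionRewardDecoder`), the GRU encoder, the network sizes, and the weight `beta` are assumptions made for illustration; PyTorch is used.

```python
# Minimal sketch (illustrative, not the authors' code) of an intrinsic reward that
# encourages each agent's rewards and observation transitions to be predictable
# from its OWN history alone, as described in the MetaVIM abstract.
import torch
import torch.nn as nn


class HistoryEncoder(nn.Module):
    """Encodes an agent's own (obs, action, reward) trajectory into a latent summary."""

    def __init__(self, obs_dim, act_dim, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim + act_dim + 1, hidden_dim, batch_first=True)

    def forward(self, obs_seq, act_seq, rew_seq):
        # obs_seq: [B, T, obs_dim], act_seq: [B, T, act_dim], rew_seq: [B, T, 1]
        x = torch.cat([obs_seq, act_seq, rew_seq], dim=-1)
        _, h = self.gru(x)           # h: [1, B, hidden_dim]
        return h.squeeze(0)          # latent summary of the agent's own history


class TransitionRewardDecoder(nn.Module):
    """Predicts the next observation and reward from (history latent, obs, action)."""

    def __init__(self, obs_dim, act_dim, hidden_dim=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(hidden_dim + obs_dim + act_dim, hidden_dim),
            nn.ReLU(),
        )
        self.next_obs_head = nn.Linear(hidden_dim, obs_dim)
        self.reward_head = nn.Linear(hidden_dim, 1)

    def forward(self, latent, obs, act):
        h = self.trunk(torch.cat([latent, obs, act], dim=-1))
        return self.next_obs_head(h), self.reward_head(h)


def intrinsic_reward(decoder, latent, obs, act, next_obs, reward, beta=0.1):
    """Shaping term: higher when the agent's own history suffices to predict what happened."""
    pred_obs, pred_rew = decoder(latent, obs, act)
    obs_err = ((pred_obs - next_obs) ** 2).mean(dim=-1)   # [B]
    rew_err = ((pred_rew - reward) ** 2).mean(dim=-1)     # [B]
    return -beta * (obs_err + rew_err)                    # one shaping term per transition


if __name__ == "__main__":
    B, T, obs_dim, act_dim = 4, 10, 12, 8   # illustrative sizes, e.g. 8 signal phases
    enc = HistoryEncoder(obs_dim, act_dim)
    dec = TransitionRewardDecoder(obs_dim, act_dim)
    obs_seq, act_seq, rew_seq = torch.randn(B, T, obs_dim), torch.randn(B, T, act_dim), torch.randn(B, T, 1)
    latent = enc(obs_seq, act_seq, rew_seq)
    r_int = intrinsic_reward(dec, latent, obs_seq[:, -1], act_seq[:, -1],
                             torch.randn(B, obs_dim), torch.randn(B, 1))
    print(r_int.shape)  # torch.Size([4])
```

In training, such a term would typically be added to each intersection's environment reward, and the same history latent could also be fed into the decentralized policy, in the spirit of the latent task variable described in the abstract.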