Venue: International Joint Conference on Neural Networks (IJCNN) · Date: 2020-07-19 · Citations: 3
Identifier
DOI:10.1109/ijcnn48605.2020.9206820
Abstract
Finding the optimal control strategy for traffic signals, especially across multiple intersections, remains a difficult task. Applying reinforcement learning (RL) algorithms to this problem is greatly limited by the partially observable and nonstationary environment. In this paper, we study how to mitigate these environmental effects through communication among agents. The proposed method, called Information Exchange Deep Q-Network (IEDQN), has a learned communication protocol that lets each local agent pay unbalanced, asymmetric attention to other agents' information. Beyond the protocol, each agent can abstract local information from its own observation history for exchange, so the communication does not depend on instantaneous information and is robust to potential communication delays. In particular, by alleviating the effects of partial observability, experience replay recovers good performance. We evaluate IEDQN through simulation experiments on a traffic grid in the Simulation of Urban MObility (SUMO), where it outperforms the comparative multi-agent RL (MARL) methods in both efficiency and effectiveness.
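The abstract does not specify the network architecture, so the following is only a minimal sketch of the described idea: each agent abstracts a message from its own observation history and attends asymmetrically over other agents' messages before producing Q-values over signal phases. The GRU history encoder, the scaled dot-product attention, and all names (`IEDQNAgentSketch`, `encode_history`, `q_head`) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class IEDQNAgentSketch(nn.Module):
    """Hypothetical per-intersection agent: encodes its own observation
    history into a message, attends over other agents' messages with
    learned (asymmetric) weights, and outputs Q-values over signal phases.
    Architectural details are assumptions; the paper abstract does not give them."""

    def __init__(self, obs_dim: int, hidden_dim: int, n_phases: int):
        super().__init__()
        # Abstract local information from the agent's own history (assumption: GRU encoder).
        self.history_encoder = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        # Learned attention over other agents' messages (assumption: scaled dot-product).
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.value = nn.Linear(hidden_dim, hidden_dim)
        # Q-value head over this intersection's signal phases.
        self.q_head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_phases),
        )

    def encode_history(self, obs_history: torch.Tensor) -> torch.Tensor:
        # obs_history: (batch, time, obs_dim) -> message: (batch, hidden_dim)
        _, h_n = self.history_encoder(obs_history)
        return h_n.squeeze(0)

    def forward(self, own_message: torch.Tensor, other_messages: torch.Tensor) -> torch.Tensor:
        # own_message: (batch, hidden); other_messages: (batch, n_others, hidden)
        q = self.query(own_message).unsqueeze(1)                 # (batch, 1, hidden)
        k = self.key(other_messages)                             # (batch, n_others, hidden)
        v = self.value(other_messages)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        context = (attn @ v).squeeze(1)                          # (batch, hidden)
        # Combine own abstracted history with attended neighbour information.
        return self.q_head(torch.cat([own_message, context], dim=-1))


# Usage with toy dimensions (all values illustrative):
agent = IEDQNAgentSketch(obs_dim=12, hidden_dim=64, n_phases=4)
own_msg = agent.encode_history(torch.randn(8, 10, 12))    # message from own history
neighbour_msgs = torch.randn(8, 8, 64)                    # messages received from 8 other agents
q_values = agent(own_msg, neighbour_msgs)                 # (8, 4) Q-values over phases
```

Because each message is an abstraction of the agent's history rather than its instantaneous observation, a message that arrives one or two steps late can still be informative, which is consistent with the robustness-to-delay claim in the abstract.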