Keywords
Reinforcement learning
Computer science
Initialization
Artificial intelligence
Machine learning
Signal (programming language)
Pace
Control (management)
Convergence (economics)
Geodesy
Economic growth
Economics
Programming language
Geography
Authors
Min Wang, Libing Wu, Jianxin Li, Dan Wu, Chao Ma
Identifier
DOI:10.1109/ijcnn55064.2022.9892538
Abstract
Reinforcement learning has been applied to various decision-making tasks and has achieved high-profile successes. A growing number of studies propose using reinforcement learning (RL) for traffic signal control to improve transportation efficiency. However, these methods suffer from a major exploration problem: during the initial stage of interaction with the environment, their performance is particularly poor, and they may even fail to converge quickly. To overcome this problem, we propose an RL model for traffic signal control based on demonstration data, which provides prior expert knowledge before the RL model is trained. The demonstrations are collected from the classic self-organizing traffic lights (SOTL) method; they serve not only as expert knowledge but also help explore and improve the entire decision-making system. Specifically, we use small demonstration datasets to pre-train the Ape-X Deep Q-Network (DQN) for traffic signal control. Training an RL model from scratch typically requires a large amount of data and time to learn a good initialization; our approach is dedicated to making the RL algorithm converge quickly and accelerating the pace of learning. Extensive experiments on three urban datasets confirm that our method converges faster and achieves lower travel time than current RL-based methods, by an average of 23.9%, 23.8%, and 11.6%, respectively.
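The core idea in the abstract, warm-starting a Q-learner by replaying expert demonstration transitions before online training, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the paper pre-trains an Ape-X DQN on SOTL-generated demonstrations, while here a toy tabular Q-learner and hand-made transitions stand in for both.

```python
# Hypothetical sketch: pre-training a Q-table from demonstration
# transitions before online RL. States and actions are toy stand-ins
# for signal phases and keep/switch decisions.

ALPHA, GAMMA = 0.1, 0.9
N_STATES, N_ACTIONS = 4, 2

def q_update(q, s, a, r, s_next):
    """One Q-learning backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + GAMMA * max(q[s_next])
    q[s][a] += ALPHA * (target - q[s][a])

def pretrain_from_demos(q, demos, epochs=50):
    """Replay expert transitions (e.g. collected from SOTL) to warm-start Q."""
    for _ in range(epochs):
        for s, a, r, s_next in demos:
            q_update(q, s, a, r, s_next)

# Toy demonstration set: (state, action, reward, next_state) tuples.
demos = [(0, 1, 1.0, 1), (1, 0, 0.5, 2), (2, 1, 1.0, 3), (3, 0, 0.0, 0)]

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
pretrain_from_demos(q, demos)
# After pre-training, demonstrated actions carry higher value estimates,
# so subsequent online exploration starts from a non-random policy.
```

The same warm-start-then-explore structure applies with a neural Q-network: the demonstration replay simply populates the replay buffer and drives the first rounds of gradient updates before environment interaction begins.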