Reinforcement learning
Trajectory
Benchmark (surveying)
Computer science
Markov decision process
Deep learning
Trajectory optimization
Mathematical optimization
Temporal difference learning
Artificial neural network
Acceleration
Artificial intelligence
Markov process
Optimal control
Authors
Yanqiu Cheng, Xianbiao Hu, Kuanmin Chen, Xinlian Yu, Yulong Luo
Identifier
DOI:10.1080/15472450.2022.2046472
Abstract
This manuscript presents an Adam optimization-based Deep Reinforcement Learning model for Mixed Traffic Flow control (ADRL-MTF), which guides a connected and autonomous vehicle's (CAV) longitudinal trajectory on a typical urban roadway with signal-controlled intersections. Two improvements are made over the prior literature. First, common simplifying assumptions, such as dividing a vehicle trajectory into segments of constant acceleration or deceleration, are avoided to improve modeling realism. Second, built on efficient Adam optimization and deep Q-learning, the proposed model avoids enumerating states and actions, and is computationally efficient enough for real-time applications. The mixed traffic flow dynamics are first formulated as a finite Markov decision process (MDP). Because time, space, and speed are discretized, this MDP has a high-dimensional state space and is very challenging to solve. We therefore propose a temporal-difference-based deep reinforcement learning approach, with an ε-greedy policy to balance exploration and exploitation. Two neural networks are developed to replace the traditional Q function and to generate the targets in the Q-learning update. These networks are trained with the Adam optimization algorithm, which extends stochastic gradient descent by tracking the second moments of the gradients, and is therefore computationally efficient with low memory requirements. The proposed model is shown to reduce fuel consumption by 7.8%, outperforming a prior benchmark model based on Monte Carlo Tree Search. The model's runtime efficiency and stability are tested, and a sensitivity analysis is also performed.
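The abstract's core machinery (an ε-greedy policy, a temporal-difference target from a frozen target network, and an Adam update that tracks first and second moments of the gradient) can be sketched in a few lines. This is an illustrative toy only, not the paper's ADRL-MTF implementation: the linear Q function, the random toy transition, and all hyperparameters below are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def adam_step(param, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: biased first/second moment estimates, then bias correction."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad       # first moment (mean)
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2  # second moment (uncentered var)
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return param - lr * m_hat / (np.sqrt(v_hat) + eps)

def epsilon_greedy(q_values, epsilon):
    """Explore a random action with probability epsilon, else act greedily."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

# Linear Q function Q(s, .) = W @ s, standing in for the paper's neural networks.
n_states, n_actions = 4, 3
W = rng.normal(scale=0.1, size=(n_actions, n_states))  # online network
W_target = W.copy()                                    # frozen target network
opt = {"t": 0, "m": np.zeros_like(W), "v": np.zeros_like(W)}

gamma = 0.99
s = rng.normal(size=n_states)                  # toy current state
a = epsilon_greedy(W @ s, epsilon=0.1)         # exploration-exploitation balance
r, s_next = 1.0, rng.normal(size=n_states)     # toy reward and next state

# Temporal-difference target computed from the target network, as in deep Q-learning.
target = r + gamma * np.max(W_target @ s_next)
td_error = (W @ s)[a] - target

# Gradient of the squared TD error w.r.t. W; only the chosen action's row is active.
grad = np.zeros_like(W)
grad[a] = td_error * s
W = adam_step(W, grad, opt)                    # one Adam training step
```

Separating the online and target parameters (here `W` and `W_target`) is what stabilizes the Q-learning targets; `W_target` would be refreshed from `W` only every so many steps.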