Motion planning
Reinforcement learning
Computer science
Intersection (aeronautics)
Motion (physics)
Artificial intelligence
Fuzzy logic
Stability (learning theory)
Path (computing)
Trajectory
Control engineering
Simulation
Engineering
Machine learning
Robot
Physics
Aerospace engineering
Programming language
Astronomy
Authors
Long Chen, Xuemin Hu, Bo Tang, Yu Cheng
Source
Journal: IEEE Transactions on Intelligent Transportation Systems
[Institute of Electrical and Electronics Engineers]
Date: 2022-04-01
Volume/Issue: 23 (4): 2966-2977
Cited by: 56
Identifier
DOI: 10.1109/tits.2020.3025671
Abstract
Motion planning is one of the most significant parts of autonomous driving. Learning-based motion planning methods have attracted considerable attention because of their ability to learn from the environment and to make decisions directly from perception. The deep Q-network, a popular reinforcement learning method, has made great progress in autonomous driving, but such methods seldom use global path information to handle directional planning, such as turning at an intersection, since the agent usually learns driving strategies only from the designed reward function, which is difficult to adapt to urban driving scenarios. Moreover, in classic Q-networks different motion commands, such as the steering wheel and accelerator, are coupled with each other, which easily leads to unstable prediction of the motion commands since they are controlled independently in a practical driving system. In this paper, a conditional deep Q-network for directional planning is proposed and applied to end-to-end autonomous driving, where the global path is used to guide the vehicle from its origin to its destination. To handle the dependency between different motion commands in Q-networks, we make use of the idea of fuzzy control and develop a defuzzification method to improve the stability of predicting the values of different motion commands. We conduct comprehensive experiments in the CARLA simulator and compare our method with state-of-the-art methods. Experimental results demonstrate that the proposed method achieves better learning performance and driving stability than other methods.
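The abstract describes two ideas that lend themselves to a concrete illustration: a Q-network conditioned on the high-level navigation command derived from the global path, with separate output heads for steering and acceleration, and a defuzzification step that converts discretized Q-values into continuous control commands. Below is a minimal sketch in PyTorch, not the authors' implementation: the class name `ConditionalQNet`, the command set, the bin ranges, and the softmax-weighted centroid defuzzification are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's released code) of a command-conditioned
# Q-network with per-actuator heads and centroid-style defuzzification of Q-values.
import torch
import torch.nn as nn

STEER_BINS = torch.linspace(-1.0, 1.0, steps=11)    # assumed discretized steering values
ACCEL_BINS = torch.linspace(0.0, 1.0, steps=6)      # assumed discretized accelerator values
COMMANDS = ("follow", "left", "right", "straight")  # high-level commands from the global path

class ConditionalQNet(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        # Simplified perception backbone for a camera frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 16, feat_dim), nn.ReLU(),
        )
        # One branch per navigation command; each branch predicts Q-values for
        # steering and acceleration separately, so the two actuators are not
        # forced into a single joint discrete action space.
        self.steer_heads = nn.ModuleList(
            [nn.Linear(feat_dim, len(STEER_BINS)) for _ in COMMANDS])
        self.accel_heads = nn.ModuleList(
            [nn.Linear(feat_dim, len(ACCEL_BINS)) for _ in COMMANDS])

    def forward(self, image, command_idx):
        feat = self.encoder(image)
        q_steer = self.steer_heads[command_idx](feat)   # (B, n_steer_bins)
        q_accel = self.accel_heads[command_idx](feat)   # (B, n_accel_bins)
        return q_steer, q_accel

def defuzzify(q_values, bins, temperature=1.0):
    """Map per-bin Q-values to one continuous command via a softmax-weighted
    centroid, analogous to centroid defuzzification in fuzzy control."""
    weights = torch.softmax(q_values / temperature, dim=-1)
    return (weights * bins).sum(dim=-1)

if __name__ == "__main__":
    net = ConditionalQNet()
    frame = torch.rand(1, 3, 88, 200)               # dummy camera frame
    q_s, q_a = net(frame, COMMANDS.index("left"))   # condition on the "turn left" command
    steer = defuzzify(q_s, STEER_BINS)
    accel = defuzzify(q_a, ACCEL_BINS)
    print(float(steer), float(accel))
```

Keeping one head per navigation command and per actuator reflects the decoupling the abstract argues for: steering and acceleration are predicted independently, and the defuzzification smooths the discrete Q-values into continuous commands rather than committing to a single argmax bin.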