Tags
Reinforcement learning
Computer science
Controller
Control theory
Control engineering
Artificial intelligence
Engineering
Control
Authors
Jia Liu,Yunduan Cui,Jianghua Duan,Zhengmin Jiang,Zhongming Pan,Kun Xu,Huiyun Li
Source
Journal: IEEE Transactions on Vehicular Technology
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-11
Volume/Issue: 73 (6): 7603-7615
Cited by: 3
Identifier
DOI: 10.1109/tvt.2024.3352543
Abstract
Autonomous vehicles have received considerable attention, yet high-speed path following control remains a critical and challenging problem. At high speeds, achieving precise control from an accurate dynamic model is difficult because the nonlinear model parameters change at the driving limit. Moreover, the strong coupling between longitudinal and lateral control further degrades control performance. To address these challenges, this paper develops a reinforcement learning-based control approach (RL-controller) and applies it to the high-speed path following task. The proposed RL-controller overcomes the challenges of explicit modeling and of tightly coupled control. Firstly, to obtain efficient learning features, the RL-controller is built on a deep soft actor-critic method that incorporates multi-layer densely connected networks with skip connections and feature reuse. Secondly, a lane curvature estimation method is developed, and the estimated curvature serves as prior knowledge to augment the reinforcement learning input state. The lane is modeled as a third-order polynomial by a transformer-based network trained on our self-collected dataset. Thirdly, intuitive reward functions are built from the cross-track error, the angle error, and the sideslip angle. Finally, the proposed RL-controller is validated in an open autonomous driving simulator. Compared with imitation learning and model-based optimal control methods, it achieves higher driving speed and lower compound error.
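Two ingredients of the abstract can be made concrete. If the lane centerline is fit by a third-order polynomial y(x) = c3*x^3 + c2*x^2 + c1*x + c0, its curvature follows the standard planar-curve formula kappa(x) = |y''(x)| / (1 + y'(x)^2)^(3/2); and a reward built from the cross-track error, angle error, and sideslip angle commonly takes a weighted penalty form. The Python sketch below assumes a quadratic penalty with illustrative weights; the function names, weights, and shaping are assumptions for the reader, not details published in this abstract.

    def lane_curvature(c3, c2, c1, c0, x):
        # Curvature of the cubic lane model y(x) = c3*x^3 + c2*x^2 + c1*x + c0
        # at longitudinal position x: kappa = |y''| / (1 + y'^2)**1.5.
        # c0 shifts the lane laterally and does not affect curvature.
        dy = 3 * c3 * x**2 + 2 * c2 * x + c1   # y'(x)
        d2y = 6 * c3 * x + 2 * c2              # y''(x)
        return abs(d2y) / (1.0 + dy**2) ** 1.5

    def path_following_reward(e_y, e_psi, beta, w_y=1.0, w_psi=0.5, w_beta=0.5):
        # Hypothetical per-step reward penalizing cross-track error e_y,
        # angle error e_psi, and sideslip angle beta; weights are assumed,
        # not taken from the paper.
        return -(w_y * e_y**2 + w_psi * e_psi**2 + w_beta * beta**2)

Under this shape the reward approaches zero only when the vehicle tracks the path closely with a stable attitude. Similarly, the "multi-layer dense connection networks with skip connections and feature reuse" can be read as a DenseNet-style feature extractor inside the soft actor-critic networks. The PyTorch sketch below shows that connectivity pattern only; the depth, layer widths, and how the authors wire it into SAC are assumptions.

    import torch
    import torch.nn as nn

    class DenseMLP(nn.Module):
        # Each layer consumes the concatenation of the raw input and all
        # earlier layer outputs (skip connections + feature reuse).
        def __init__(self, in_dim, hidden=64, depth=3):
            super().__init__()
            self.layers = nn.ModuleList()
            dim = in_dim
            for _ in range(depth):
                self.layers.append(nn.Linear(dim, hidden))
                dim += hidden  # later layers also see this layer's output

        def forward(self, x):
            feats = [x]
            for layer in self.layers:
                feats.append(torch.relu(layer(torch.cat(feats, dim=-1))))
            return torch.cat(feats, dim=-1)

An SAC actor or critic head would then map this concatenated feature vector to actions or Q-values.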