Reinforcement learning
Markov decision process
Computer science
Constraint (computer-aided design)
State (computer science)
Process (computing)
Artificial intelligence
Markov process
Engineering
Algorithm
Mathematics
Mechanical engineering
Statistics
Operating system
Authors
Ziqing Gu, Lingping Gao, Haitong Ma, Shengbo Eben Li, Sifa Zheng, Wei Jing, Junbo Chen
Source
Journal: IEEE Transactions on Intelligent Transportation Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-09-01
Volume/Issue: 24 (9): 9966-9983
Cited by: 5
Identifier
DOI: 10.1109/tits.2023.3271642
Abstract
Reinforcement learning (RL) has shown excellent performance on sequential decision-making problems, where safety in the form of state constraints is of great significance in the design and application of RL. Simple constrained end-to-end RL methods might fail badly in complex systems such as autonomous vehicles. In contrast, some hierarchical RL (HRL) methods generate driving goals directly, which can be closely combined with motion planning. To meet safety requirements, some safety-enhanced RL methods add post-processing modules to avoid unsafe goals or pursue expectation-based safety, which accepts the existence of unsafe states and tolerates some violations of the safety constraints. However, ensuring state safety is vital for autonomous vehicles. Therefore, this paper proposes a state-based safety enhancement method for autonomous driving via direct hierarchical reinforcement learning. Specifically, we design a constrained reinforcement learner based on the State-based Constrained Markov Decision Process (SCMDP), in which a learnable safety module adjusts the constraint strength adaptively. We integrate a dynamics module into policy training and generate future goals that account for safety, temporal-spatial continuity, and dynamic feasibility, which eliminates dependence on a prior model. Simulations in typical highway scenarios with uncertainties show that the proposed method achieves better training performance, higher driving safety in interactive scenarios, more intelligent decisions in traffic congestion, and more economical driving on roads with changing slopes.
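The abstract's central distinction is between expectation-based safety and state-based safety. As a rough sketch of that distinction (the paper's exact SCMDP formulation is not reproduced on this page), a conventional constrained MDP bounds an expected discounted cost, whereas a state-based constraint must hold at every visited state:

\max_{\pi}\ \mathbb{E}_{\tau\sim\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t} r(s_t,a_t)\right]
\quad\text{s.t.}\quad
\mathbb{E}_{\tau\sim\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t} c(s_t,a_t)\right]\le d
\qquad\text{(expectation-based CMDP)}

\max_{\pi}\ \mathbb{E}_{\tau\sim\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t} r(s_t,a_t)\right]
\quad\text{s.t.}\quad
h(s_t)\le 0\ \ \forall t
\qquad\text{(state-based SCMDP)}

The expectation-based form tolerates individual unsafe states as long as the average cost stays under the budget d; the state-based form does not, which is why the abstract argues it better matches the needs of autonomous driving.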
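The "learnable safety module [that] adjusts the constraint strength adaptively" suggests a mechanism in the spirit of a learnable Lagrange multiplier, a common pattern in constrained RL. The sketch below is a hypothetical PyTorch illustration of that pattern, not the paper's algorithm; all names (log_lam, h_values, tol) are assumptions introduced here:

import torch

# Hypothetical sketch of adaptive constraint strength via a learnable
# Lagrange multiplier. Convention assumed here: h(s) <= 0 means the state is safe.
log_lam = torch.zeros(1, requires_grad=True)   # lambda = exp(log_lam) > 0
lam_opt = torch.optim.Adam([log_lam], lr=3e-4)
tol = 1e-3                                     # tolerated mean state violation

def lagrangian_policy_loss(reward_objective, h_values):
    """reward_objective: scalar return estimate to maximize.
    h_values: per-state constraint values h(s_t)."""
    violation = torch.relu(h_values).mean()    # only unsafe states are penalized
    lam = log_lam.exp().detach()               # lambda is fixed during the policy step
    return -(reward_objective - lam * violation), violation

def update_multiplier(violation):
    """Dual ascent: grow lambda while the mean violation exceeds tol,
    shrink it once the policy is (statistically) safe."""
    lam_opt.zero_grad()
    lam_loss = -log_lam.exp() * (violation.detach() - tol)
    lam_loss.backward()
    lam_opt.step()

# Toy usage with fabricated tensors, just to show the update pattern.
reward_objective = torch.tensor(1.0)
h_values = torch.tensor([-0.2, 0.1, -0.05])    # one unsafe state (h > 0)
loss, violation = lagrangian_policy_loss(reward_objective, h_values)
update_multiplier(violation)

Under this pattern, the multiplier rises while violations persist and decays once the policy is safe, which is what "adaptive constraint strength" typically means in Lagrangian constrained RL; how the paper's SCMDP learner realizes it may differ.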