Reinforcement learning
Markov decision process
Trajectory
Computer science
Software deployment
Process (computing)
Task (project management)
Inference
Stability (learning theory)
Collision avoidance
Trajectory optimization
Artificial intelligence
Markov process
Machine learning
Engineering
Computer security
Collision
Systems engineering
Statistics
Physics
Mathematics
Astronomy
Operating system
Authors
Fan Yang, Wenxuan Zhou, Zuxin Liu, Ding Zhao, David Held
Source
Venue: Cornell University - arXiv
Date: 2023-01-01
Identifier
DOI:10.48550/arxiv.2310.06903
Abstract
Safe Reinforcement Learning (RL) plays an important role in applying RL algorithms to safety-critical real-world applications, addressing the trade-off between maximizing rewards and adhering to safety constraints. This work introduces a novel approach that combines RL with trajectory optimization to manage this trade-off effectively. The approach embeds safety constraints within the action space of a modified Markov Decision Process (MDP). The RL agent produces a sequence of actions that are transformed into safe trajectories by a trajectory optimizer, thereby ensuring safety and increasing training stability. This approach excels on challenging Safety Gym tasks, achieving significantly higher rewards and near-zero safety violations during inference. The method's real-world applicability is demonstrated through a safe and effective deployment in a real robot task of box-pushing around obstacles.
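The core mechanism described above, where the agent's raw actions are mapped through an optimizer so that only safe trajectories reach the environment, can be illustrated with a toy safety layer. The sketch below is a simplified stand-in, not the paper's actual trajectory optimizer: it merely projects proposed waypoints out of circular obstacle regions, and the function name, obstacle model, and parameters are all hypothetical.

```python
import numpy as np

def project_waypoints(waypoints, obstacles, radius):
    """Toy safety layer (illustrative only, not the paper's optimizer).

    Takes the agent's proposed 2D waypoints and pushes any waypoint that
    falls inside a circular obstacle of the given radius back onto the
    obstacle boundary, so the returned trajectory is collision-free
    under this simplified obstacle model.
    """
    safe = np.array(waypoints, dtype=float)
    for i, point in enumerate(safe):
        for center in obstacles:
            center = np.asarray(center, dtype=float)
            offset = point - center
            dist = np.linalg.norm(offset)
            if dist < radius:
                if dist < 1e-9:
                    # Degenerate case: waypoint exactly at the obstacle
                    # center; pick an arbitrary radial direction.
                    offset, dist = np.array([1.0, 0.0]), 1.0
                # Move the waypoint to the obstacle boundary along the
                # radial direction.
                safe[i] = center + offset / dist * radius
                point = safe[i]
    return safe
```

For example, a waypoint proposed inside an obstacle at the origin is projected onto its boundary, while waypoints already clear of all obstacles pass through unchanged; in the paper's framework, the RL agent then learns rewards through this safe-by-construction mapping rather than through raw actions.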