Pareto principle
Mathematical optimization
Reinforcement learning
Computer science
Multi-objective optimization
Pareto optimality
Perturbation (astronomy)
Optimization problem
Gradient method
Mathematics
Artificial intelligence
Quantum mechanics
Physics
Authors
Zhuan Zhou, Ming Huang, Feiyang Pan, Jing He, Xiang Ao, Dandan Tu, Qiang He
Source
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
[Association for the Advancement of Artificial Intelligence (AAAI)]
Date: 2023-06-26
Volume/Issue: 37 (9): 11443-11451
Citations: 2
Identifiers
DOI: 10.1609/aaai.v37i9.26353
Abstract
Constrained Reinforcement Learning (CRL), which pursues maximizing long-term returns while constraining costs, has attracted broad interest in recent years. Although CRL can be cast as a multi-objective optimization problem, it still faces the key challenge that gradient-based Pareto optimization methods tend to stick to known Pareto-optimal solutions even when they yield poor returns (e.g., the safest self-driving car that never moves) or violate the constraints (e.g., the record-breaking racer that crashes the car). In this paper, we propose Gradient-adaptive Constrained Policy Optimization (GCPO for short), a novel Pareto optimization method for CRL with two adaptive gradient recalibration techniques. First, to find Pareto-optimal solutions with balanced performance over all targets, we propose gradient rebalancing, which forces the agent to improve more on under-optimized objectives at every policy iteration. Second, to guarantee that the cost constraints are satisfied, we propose gradient perturbation, which can temporarily sacrifice returns for costs. Experiments on the SafetyGym benchmarks show that our method consistently outperforms previous CRL methods in reward while satisfying the constraints.
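The abstract describes the two recalibration techniques only at a high level. As a rough illustration (not the authors' algorithm: the inverse-improvement weighting rule, the perturbation step size, and all function names below are assumptions made for this sketch), the following minimal NumPy example shows how per-objective gradients could be rebalanced toward under-optimized objectives and then perturbed toward constraint satisfaction when a cost limit is violated:

```python
import numpy as np

def gradient_rebalancing(grads, improvements, eps=1e-8):
    """Combine per-objective gradients, up-weighting under-optimized objectives.

    grads: list of gradient vectors, one per objective (e.g., return, -cost).
    improvements: recent per-objective improvement estimates; objectives that
    improved less receive larger weights. This inverse-improvement weighting
    is a hypothetical stand-in, not the paper's exact recalibration rule.
    """
    imp = np.asarray(improvements, dtype=float)
    weights = imp.max() - imp + eps        # least-improved objective -> largest weight
    weights /= weights.sum()               # normalize to a convex combination
    return sum(w * g for w, g in zip(weights, grads))

def gradient_perturbation(update, cost_grad, cost_value, cost_limit, step=0.5):
    """If the cost constraint is violated, push the update away from rising cost.

    This temporarily sacrifices return to restore feasibility; `step` is an
    assumed perturbation size, not a value from the paper.
    """
    if cost_value > cost_limit:
        return update - step * cost_grad   # cost_grad is the ascent direction of cost
    return update

# Toy 2-D example: combine the return gradient with the negated cost gradient,
# then perturb because the current cost (1.4) exceeds the limit (1.0).
g_return = np.array([1.0, 0.2])
g_cost = np.array([0.3, 1.0])
update = gradient_rebalancing([g_return, -g_cost], improvements=[0.9, 0.1])
update = gradient_perturbation(update, g_cost, cost_value=1.4, cost_limit=1.0)
print(update)
```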