Reinforcement learning
Model predictive control
Computer science
Stability (learning theory)
Exploitation
Controller
Control
Optimal control
Control theory
Mathematical optimization
Artificial intelligence
Machine learning
Mathematics
Authors
Mario Zanon, Sébastien Gros
Source
Journal: IEEE Transactions on Automatic Control (Institute of Electrical and Electronics Engineers)
Date: 2021-08-01
Volume/Issue: 66 (8): 3638-3652
Cited by: 99
Identifier
DOI: 10.1109/TAC.2020.3024161
Abstract
Reinforcement Learning (RL) has recently impressed the world with stunning results in various applications. While the potential of RL is now well established, many critical aspects still need to be addressed, including safety and stability. These issues, though partially neglected by the RL community, are central to the control community, which has investigated them extensively. Model Predictive Control (MPC) is one of the most successful control techniques, owing in part to its ability to provide such guarantees even for uncertain constrained systems. Since MPC is an optimization-based technique, optimality has often been claimed as well. Unfortunately, the performance of MPC depends strongly on the accuracy of the model used for predictions. In this paper, we propose to combine RL and MPC in order to exploit the advantages of both and thereby obtain a controller that is both optimal and safe. We illustrate the results with a numerical example in simulation.
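The combination described in the abstract can be made concrete with a toy example. The following Python sketch is a minimal illustration, not the algorithm of this paper: it pairs an input-constrained MPC with an RL-style tuning loop, so the MPC solves a constrained optimal control problem at every step (the input bound is always respected), while a finite-difference update adjusts the MPC's internal model parameter to reduce the measured closed-loop cost. The scalar dynamics, weights, step sizes, and the use of cvxpy are all assumptions made for illustration; the paper itself works with robust MPC and RL-based (e.g., Q-learning) updates of the MPC parameters.

# Minimal sketch of the RL + MPC combination described above; NOT the paper's
# exact algorithm. The MPC enforces |u| <= U_MAX at all times (safety), while
# an RL-style finite-difference update tunes the MPC's model parameter to
# lower the observed closed-loop cost (performance). All numbers are made up.
import numpy as np
import cvxpy as cp

A_TRUE, B = 0.9, 0.5   # true plant x+ = A_TRUE*x + B*u; A_TRUE unknown to MPC
U_MAX, N = 1.0, 10     # input bound and prediction horizon
Q, R = 1.0, 0.1        # stage-cost weights

def mpc_input(x0, a_model):
    # First input of a finite-horizon MPC that predicts with the (possibly
    # wrong) model x+ = a_model*x + B*u under the hard constraint |u| <= U_MAX.
    x = cp.Variable(N + 1)
    u = cp.Variable(N)
    constraints = [x[0] == x0, cp.abs(u) <= U_MAX]
    constraints += [x[k + 1] == a_model * x[k] + B * u[k] for k in range(N)]
    cost = Q * cp.sum_squares(x) + R * cp.sum_squares(u)
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return float(u.value[0])

def closed_loop_cost(a_model, x0=2.0, steps=25):
    # Roll the MPC out on the TRUE plant and accumulate the real stage cost.
    # A fixed noise seed keeps evaluations deterministic, so the finite
    # differences below are well defined.
    rng = np.random.default_rng(0)
    x, J = x0, 0.0
    for _ in range(steps):
        u = mpc_input(x, a_model)
        J += Q * x**2 + R * u**2
        x = A_TRUE * x + B * u + 0.01 * rng.standard_normal()
    return J

# RL-style tuning: gradient descent on the measured closed-loop cost with a
# finite-difference gradient estimate. The paper uses proper RL machinery
# (e.g., Q-learning on the MPC value function); this loop only conveys the
# structure "learning tunes the MPC, the MPC keeps the system safe".
a_model, eps, step = 0.5, 0.05, 0.01
for it in range(10):
    grad = (closed_loop_cost(a_model + eps)
            - closed_loop_cost(a_model - eps)) / (2.0 * eps)
    a_model -= step * grad
    print(f"iteration {it}: a_model = {a_model:.3f}")

Note the division of labor this mimics: the input constraint is enforced by the MPC regardless of the value of the learned parameter, so the learning update can only change closed-loop performance, not constraint satisfaction.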