Keywords: zero-sum games, nonlinear systems, H∞ methods in control theory, control theory, applied mathematics, algorithms, mathematical optimization, computer science, mathematical analysis, artificial intelligence
Authors
Jie Li, Shengbo Eben Li, Jingliang Duan, Yao Lyu, Wenjun Zou, Yang Guan, Yuming Yin
Source
Journal: IEEE Transactions on Automatic Control [Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 69 (1): 426-433
Identifiers
DOI: 10.1109/tac.2023.3266277
Abstract
Though policy evaluation error profoundly affects the direction of policy optimization and the convergence property, it is usually ignored in policy iteration methods. This work incorporates practical inexact policy evaluation into a simultaneous policy update paradigm to reach the Nash equilibrium of nonlinear zero-sum games. In the proposed algorithm, the restriction of precise policy evaluation is removed by allowing a bounded evaluation error characterized by the Hamiltonian, without sacrificing convergence guarantees. By exploiting the Fréchet differential, the practical iterative process of the value function with estimation error is converted into Newton's method with variable steps, which are inversely proportional to the evaluation errors. Accordingly, we construct a monotone scalar sequence that follows the same Newton iteration as the value sequence to bound the error of the value function, which enjoys an exponential convergence rate. Numerical results show its convergence in affine systems and its potential to cope with general nonlinear plants.
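The abstract's core idea, that inexact policy evaluation behaves like Newton's method with a variable step whose size shrinks as the evaluation error grows, can be illustrated with a minimal scalar sketch. This is a hypothetical stand-in, not the paper's actual algorithm: `H` below is an arbitrary scalar residual playing the role of the Hamiltonian, and the damping rule `1 / (1 + eps)` is an illustrative choice of a step inversely proportional to the error, not the paper's specific construction.

```python
def newton_variable_step(H, dH, v0, eval_errors):
    """Damped Newton iteration: each step is scaled by a factor
    inversely proportional to that iteration's evaluation error,
    mimicking how inexact policy evaluation shortens the update."""
    v = v0
    trajectory = [v]
    for eps in eval_errors:
        step = 1.0 / (1.0 + eps)       # variable step; eps = 0 gives exact Newton
        v = v - step * H(v) / dH(v)    # damped Newton update on the residual
        trajectory.append(v)
    return trajectory

# Toy residual H(v) = v^2 - 2, whose root is sqrt(2).
H = lambda v: v * v - 2.0
dH = lambda v: 2.0 * v

# Early iterations use noisy (large-error) evaluations, later ones exact.
traj = newton_variable_step(H, dH, v0=2.0,
                            eval_errors=[0.5, 0.2, 0.05, 0.0, 0.0, 0.0])
```

Despite the shortened early steps, the iterates still converge monotonically toward the root once the evaluation error vanishes, which is the qualitative behavior the paper's bounding scalar sequence formalizes.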