Subgradient method
Mathematical optimization
Iterative learning control
Computer science
Iterative method
Function (biology)
Convergence (economics)
Algorithm
Mathematics
Control (management)
Artificial intelligence
Economic growth
Evolutionary biology
Biology
Economics
Authors
Xiaochun Dong, Ruikun Zhang, Xiaoxue Chen, Lin Xue
Identifiers
DOI:10.1080/00207721.2024.2388810
Abstract
In this paper, we study the distributed optimisation problem in an iterative environment, where the global objective function is the sum of the agents' local objective functions, and each agent, equipped with its local objective function, performs repeated tasks over a finite time horizon. The objective is to minimise the global objective function through local communication among agents in the repeatedly running system. To solve this problem, we propose a distributed optimisation algorithm based on iterative learning methods that combines the terminal iterative learning strategy with the subgradient strategy. When the initial states of all agents are identical in each iteration, the proposed algorithm is proved to drive all agents' states asymptotically to the optimal solution. Moreover, considering that the initial states of agents in each iteration may not be accurately measured, we further study the distributed optimisation problem under different initial states, and find that all agents' states asymptotically converge to a neighbourhood of the optimal solution. Finally, the effectiveness of the algorithm is verified by numerical simulations.
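The paper's exact update law is not reproduced in this abstract, but the generic distributed subgradient scheme it builds on can be sketched as follows. This is a hypothetical illustration, not the authors' algorithm: each agent mixes its state with its ring neighbours' states (a doubly stochastic consensus step) and then takes a diminishing-step subgradient step on its own local objective. The local objectives f_i(x) = (x - c_i)^2 and the ring topology are assumptions chosen for the example; the global minimiser of the sum is the mean of the c_i.

```python
def distributed_subgradient(targets, steps=2000):
    """Illustrative distributed subgradient method on a ring of agents.

    Agent i minimises f_i(x) = (x - targets[i])**2; the sum of all f_i
    is minimised at the mean of `targets`.
    """
    n = len(targets)
    x = [0.0] * n  # identical initial states for all agents
    for k in range(steps):
        alpha = 1.0 / (k + 1)  # diminishing step size
        # Consensus step: average with ring neighbours
        # (uniform, doubly stochastic mixing weights).
        mixed = [(x[(i - 1) % n] + x[i] + x[(i + 1) % n]) / 3.0
                 for i in range(n)]
        # Local subgradient of f_i(x) = (x - c_i)^2 is 2*(x - c_i).
        x = [mixed[i] - alpha * 2.0 * (x[i] - targets[i])
             for i in range(n)]
    return x

states = distributed_subgradient([1.0, 2.0, 6.0])
# all agents' states end up near the global optimum mean(targets) = 3.0
```

With the diminishing step size, the disagreement between agents shrinks on the order of the step size, so all states settle near the common optimum; with a constant step size one would instead expect convergence only to a neighbourhood of it, echoing the abstract's result for imperfectly measured initial states.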