Keywords
Mathematics, Proximal gradient method, Rate of convergence, Differentiable function, Multiobjective optimization, Mathematical optimization, Convergence (economics), Convex function, Gradient descent, Applied mathematics, Pareto principle, Regular polygon, Computer science, Pure mathematics, Artificial neural network, Geometry, Economic growth, Machine learning, Computer network, Channel (broadcasting), Economics
Authors
Hiroki Tanabe, Ellen H. Fukuda, Nobuo Yamashita
Source
Journal: Optimization Letters (Springer Science+Business Media)
Date: 2022-04-24
Volume/Issue: 17 (2): 333-350
Cited by: 16
Identifiers
DOI:10.1007/s11590-022-01877-7
Abstract
Many descent algorithms for multiobjective optimization have been developed in the last two decades. Tanabe et al. (Comput Optim Appl 72(2):339--361, 2019) proposed a proximal gradient method for multiobjective optimization, which can solve multiobjective problems whose objective functions are each the sum of a continuously differentiable function and a closed, proper, and convex one. Under reasonable assumptions, it is known that the accumulation points of the sequences generated by this method are Pareto stationary. However, the convergence rates were not established in that paper. Here, we show global convergence rates for the multiobjective proximal gradient method, matching what is known in scalar optimization. More specifically, by using merit functions to measure the complexity, we present the convergence rates for non-convex ($O(\sqrt{1 / k})$), convex ($O(1 / k)$), and strongly convex ($O(r^k)$ for some $r \in (0, 1)$) problems. We also extend the so-called Polyak-{\L}ojasiewicz (PL) inequality to multiobjective optimization and establish the linear convergence rate for multiobjective problems that satisfy such inequalities ($O(r^k)$ for some $r \in (0, 1)$).
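Since the abstract only names the method, a minimal sketch may help. In the smooth two-objective special case (each nonsmooth term $g_i \equiv 0$), the per-iteration subproblem of the multiobjective proximal gradient method reduces, via its dual, to finding the minimum-norm convex combination of the gradients, $v^k = \sum_i \lambda_i \nabla f_i(x^k)$ with $\lambda$ on the simplex, and the update is $x^{k+1} = x^k - v^k/\ell$ for a constant $\ell$ bounding the Lipschitz constants of the gradients; $v^k = 0$ certifies Pareto stationarity. The Python sketch below illustrates this special case under those assumptions; the toy objectives and the helper min_norm_combination are illustrative, not the authors' implementation.

```python
import numpy as np

# Toy problem (an assumption for illustration): two convex quadratics whose
# Pareto set is the segment between a and b.
a, b = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
f = [lambda x: float(np.sum((x - a) ** 2)), lambda x: float(np.sum((x - b) ** 2))]
grads = [lambda x: 2.0 * (x - a), lambda x: 2.0 * (x - b)]
ell = 2.0  # any upper bound on the Lipschitz constants of the gradients

def min_norm_combination(g1, g2):
    # Dual subproblem for two objectives: minimize over lam in [0, 1]
    # ||lam*g1 + (1-lam)*g2||^2, which has a one-dimensional closed form.
    diff = g1 - g2
    denom = float(diff @ diff)
    lam = 0.5 if denom == 0.0 else float(np.clip((g2 @ (g2 - g1)) / denom, 0.0, 1.0))
    return lam * g1 + (1.0 - lam) * g2

x = np.array([2.0, 3.0])
for k in range(1000):
    v = min_norm_combination(grads[0](x), grads[1](x))
    if np.linalg.norm(v) < 1e-8:  # v ~ 0 means x is (approximately) Pareto stationary
        break
    x = x - v / ell  # multiobjective (proximal) gradient step with g_i = 0

print(k, x, [fi(x) for fi in f])  # x lands on the Pareto segment between a and b
```

With two objectives the simplex-constrained QP is one-dimensional, hence the closed form above; with more objectives or nonzero $g_i$, the subproblem requires a QP or proximal solver. For reference, the scalar PL inequality the paper generalizes reads $\frac{1}{2}\|\nabla f(x)\|^2 \ge \mu\,(f(x) - f^*)$, which is the classical route to linear rates without convexity.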