Subject: Mathematics

Keywords: conjugate gradient method; gradient descent; line search; nonlinear conjugate gradient method; smooth optimization; gradient method; mathematical optimization; trust region; convex function; convex optimization; optimization problem; conjugate residual method
Authors
Gonglin Yuan, Zengxin Wei, Guoyin Li
Identifier
DOI: 10.1016/j.cam.2013.04.032
Abstract
The conjugate gradient (CG) method is one of the most popular methods for solving smooth unconstrained optimization problems due to its simplicity and low memory requirement. So far, however, the use of CG methods has been largely restricted to smooth optimization problems. The purpose of this paper is to present efficient conjugate gradient-type methods for solving nonsmooth optimization problems. Using the Moreau–Yosida regularization (smoothing) approach and a nonmonotone line search technique, we propose a modified Polak–Ribière–Polyak (PRP) CG algorithm for solving a nonsmooth unconstrained convex minimization problem. Our algorithm possesses the following three desired properties: (i) the search direction satisfies the sufficient descent property and automatically belongs to a trust region; (ii) the search direction makes use of not only the gradient information but also the function value information; and (iii) the algorithm inherits an important property of the well-known PRP method, namely the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, which prevents a long sequence of tiny steps. Under standard conditions, we show that the algorithm converges globally to an optimal solution. Numerical experiments show that our algorithm is effective and suitable for solving large-scale nonsmooth unconstrained convex optimization problems.
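To make the PRP update rule concrete, the following is a minimal sketch of the *classical* PRP conjugate gradient method with a backtracking Armijo line search on a smooth convex quadratic. It is illustrative only and is not the authors' modified algorithm: the paper's method additionally uses Moreau–Yosida regularization for nonsmooth objectives and a nonmonotone line search, neither of which is reproduced here; the function names and parameters below are this sketch's own assumptions.

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-8, max_iter=1000):
    """Classical Polak-Ribiere-Polyak CG with Armijo backtracking.
    Illustrative sketch; not the paper's modified method."""
    x = x0.astype(float)
    g = grad(x)
    d = -g                          # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along d
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PRP parameter: beta = g_new^T (g_new - g) / ||g||^2
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)  # PRP+ truncation
        d = -g_new + beta * d
        if g_new.dot(d) >= 0:       # restart if descent is lost
            d = -g_new
        x, g = x_new, g_new
    return x

# Usage: minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose minimizer solves A x = b (here x* = [0.2, 0.4]).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x.dot(A @ x) - b.dot(x)
grad = lambda x: A @ x - b
x_star = prp_cg(f, grad, np.zeros(2))
```

Note how a small step (small `g_new - g`) drives `beta` toward zero, so the direction collapses to steepest descent; this is the PRP self-restarting behavior that property (iii) of the abstract refers to.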