Proximal gradient methods for learning
Convexity
Regularization (mathematics)
Algorithm
Mathematics
Convex function
Convex optimization
Regular polygon
Operator (mathematics)
Mathematical optimization
Proximal gradient method
Compressed sensing
Subderivative
Computer science
Artificial intelligence
Biochemistry
Chemistry
Geometry
Repressor
Transcription factor
Financial economics
Economics
Gene
Authors
Yating Xu,Ming Qu,Lijie Liu,G.R. Liu,Jian Zou
Abstract
When the sparse regularizer is convex and its proximal operator has a closed-form expression, first-order iterative algorithms based on proximal operators can solve sparse optimization problems effectively. Recently, plug-and-play (PnP) algorithms have achieved significant success by replacing the proximal operators in iterative algorithms with advanced denoisers. However, convex sparse regularizers such as the ℓ1-norm tend to underestimate the large values within the sparse solution. In contrast, convex non-convex (CNC) sparse regularization admits a non-convex regularizer while preserving the convexity of the overall objective function. In this paper, we propose several PnP algorithms for solving the CNC sparse regularization model and discuss their convergence properties. Specifically, we first derive the proximal operator for CNC sparse regularization in iterative form, and then integrate it with several first-order algorithms to obtain different PnP algorithms. Based on monotone operator theory, we prove that the proposed PnP algorithms converge under the conditions that the data-fidelity term is strongly convex and the residuals of the denoisers are contractive. We also emphasize the significance of strong convexity of the data-fidelity term for the CNC sparse regularization model. Finally, we demonstrate the superiority of the proposed PnP algorithms on extensive image restoration tasks.
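A minimal sketch of the plug-and-play scheme the abstract describes, assuming a quadratic data-fidelity term f(x) = ½‖Ax − b‖² and using soft-thresholding as a stand-in denoiser (with this choice the iteration reduces to classic ISTA for the ℓ1-norm; the paper's CNC proximal operator and learned denoisers would slot into the same place). All function names and problem sizes here are illustrative, not from the paper:

```python
import numpy as np

def pnp_ista(A, b, denoiser, step, x0, iters=500):
    """PnP-ISTA sketch: gradient step on f(x) = 0.5*||Ax - b||^2,
    then a denoiser applied where the proximal operator would be."""
    x = x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - b)       # gradient of the data-fidelity term
        x = denoiser(x - step * grad)  # denoiser replaces prox_{step*g}
    return x

def soft_threshold(tau):
    """Soft-thresholding 'denoiser'; recovers ISTA for the l1-norm."""
    return lambda v: np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Illustrative sparse recovery problem (noiseless, 3-sparse signal).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]
b = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
x_hat = pnp_ista(A, b, soft_threshold(0.01 * step), step, np.zeros(100))
```

Replacing `soft_threshold` with a stronger denoiser (e.g. a learned network whose residual is contractive, as the convergence analysis requires) yields the PnP variants the paper studies.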