Computer science
Robustness (evolution)
Regularization (linguistics)
Deep learning
Predictability
Algorithm
Artificial intelligence
Synthetic data
Artificial neural network
Convergence (economics)
Backpropagation
Mathematics
Statistics
Gene
Economics
Chemistry
Biochemistry
Economic growth
Authors
Zhuo‐Xu Cui,Sen Jia,Jing Cheng,Qingyong Zhu,Yuanyuan Liu,Kankan Zhao,Ziwen Ke,Wenqi Huang,Haifeng Wang,Yanjie Zhu,Leslie Ying,Dong Liang
Identifier
DOI: 10.1109/TMI.2023.3293826
Abstract
In recent times, model-driven deep learning has evolved iterative algorithms into cascade networks by replacing the regularizer's first-order information, such as the (sub)gradient or proximal operator, with a network module. This approach offers greater explainability and predictability than typical data-driven networks. However, in theory there is no assurance that a functional regularizer exists whose first-order information matches the substituted network module, which implies that the unrolled network output may not align with the regularization model. Furthermore, few established theories guarantee the global convergence and robustness (regularity) of unrolled networks under practical assumptions. To address this gap, we propose a safeguarded methodology for network unrolling. Specifically, for parallel MR imaging, we unroll a zeroth-order algorithm in which the network module serves as the regularizer itself, so that the network output can be covered by a regularization model. Additionally, inspired by deep equilibrium models, we run the unrolled network to a fixed point before backpropagation and demonstrate that the fixed point can tightly approximate the actual MR image. We also prove that the proposed network is robust against noisy interference when the measurement data contain noise. Finally, numerical experiments indicate that the proposed network consistently outperforms state-of-the-art MRI reconstruction methods, including traditional regularization methods and other unrolled deep learning techniques.
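To make the fixed-point (equilibrium) forward pass concrete, below is a minimal sketch, assuming a PyTorch setting, of an unrolled reconstruction that iterates a learned module interleaved with a data-consistency step until the iterate stops changing, and only then takes a differentiable step for backpropagation. This illustrates the general deep-equilibrium idea referenced in the abstract, not the paper's specific zeroth-order update or parallel-imaging operators; the names RegularizerNet, equilibrium_forward, forward_op, and adjoint_op are hypothetical placeholders.

```python
# A minimal sketch (not the authors' code) of an equilibrium-style unrolled
# forward pass for MR reconstruction. forward_op / adjoint_op stand for the
# (assumed) parallel-imaging encoding operator and its adjoint.
import torch
import torch.nn as nn


class RegularizerNet(nn.Module):
    """Small CNN standing in for the learned network module."""

    def __init__(self, channels: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual refinement of the current image


def equilibrium_forward(net, y, forward_op, adjoint_op,
                        step: float = 1.0, max_iter: int = 50, tol: float = 1e-4):
    """Iterate x <- net(x - step * adjoint_op(forward_op(x) - y)) until the
    relative change falls below tol, approximating a fixed point."""
    x = adjoint_op(y)  # e.g. zero-filled reconstruction as the starting point
    with torch.no_grad():  # run to equilibrium without storing the graph
        for _ in range(max_iter):
            x_next = net(x - step * adjoint_op(forward_op(x) - y))
            if torch.norm(x_next - x) / (torch.norm(x) + 1e-8) < tol:
                x = x_next
                break
            x = x_next
    # one differentiable step at the fixed point, in the spirit of
    # deep equilibrium models, so gradients flow only through this step
    return net(x - step * adjoint_op(forward_op(x) - y))
```

Running the iteration to convergence before backpropagation is what allows the output to be analyzed as a fixed point rather than as the result of a fixed, finite number of cascades.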