Keywords
Underdetermined system
Computer science
Robustness (evolution)
Deep learning
Artificial intelligence
Inverse problem
Artificial neural network
Interpretability
Machine learning
Deep neural networks
Iterative reconstruction
Shrinkage
Algorithm
Pattern recognition (psychology)
Mathematics
Mathematical analysis
Gene
Biochemistry
Chemistry
Programming language
Authors
Martin Genzel, Jan Macdonald, Maximilian März
Identifier
DOI: 10.1109/TPAMI.2022.3148324
Abstract
In the past five years, deep learning methods have become state-of-the-art in solving various inverse problems. Before such approaches can find application in safety-critical fields, a verification of their reliability appears mandatory. Recent works have pointed out instabilities of deep neural networks for several image reconstruction tasks. In analogy to adversarial attacks in classification, it was shown that slight distortions in the input domain may cause severe artifacts. The present article sheds new light on this concern, by conducting an extensive study of the robustness of deep-learning-based algorithms for solving underdetermined inverse problems. This covers compressed sensing with Gaussian measurements as well as image recovery from Fourier and Radon measurements, including a real-world scenario for magnetic resonance imaging (using the NYU-fastMRI dataset). Our main focus is on computing adversarial perturbations of the measurements that maximize the reconstruction error. A distinctive feature of our approach is the quantitative and qualitative comparison with total-variation minimization, which serves as a provably robust reference method. In contrast to previous findings, our results reveal that standard end-to-end network architectures are not only resilient against statistical noise, but also against adversarial perturbations. All considered networks are trained by common deep learning techniques, without sophisticated defense strategies.
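For orientation, the attack described in the abstract amounts to maximizing the reconstruction error of a trained network over a norm ball of measurement perturbations, while the total-variation reference method is commonly posed as min_z ||∇z||_1 subject to Az = y. Below is a minimal PyTorch sketch of such a PGD-style measurement attack. The names recon_net, A, and eps are hypothetical placeholders, and the projected-gradient scheme is a generic standard choice, not necessarily the authors' exact procedure.

```python
import torch

def measurement_attack(recon_net, A, x, eps, steps=100, lr=0.01):
    """PGD-style search for a measurement perturbation (sketch only).

    recon_net : trained reconstruction network, measurements -> image (assumed)
    A         : forward operator, e.g., a Gaussian measurement matrix (assumed)
    x         : ground-truth signal/image as a flat tensor
    eps       : l2-budget for the adversarial perturbation
    """
    y = A @ x                                # clean measurements y = Ax
    delta = torch.zeros_like(y, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = recon_net(y + delta)         # reconstruct from perturbed data
        (-torch.norm(x_hat - x)).backward()  # ascend the reconstruction error
        opt.step()
        with torch.no_grad():                # project onto the l2-ball of radius eps
            n = torch.norm(delta)
            if n > eps:
                delta.mul_(eps / n)
    return delta.detach()
```

Optimizing with Adam and projecting back onto the l2-ball after each step is a common heuristic for norm-constrained attacks; a plain signed-gradient update would serve equally well for a rough robustness probe.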