Robustness
Computer science
Artificial intelligence
Adversarial system
Pruning
Deep learning
Medical imaging
Benchmark
Machine learning
Deep neural network
Image
Computer vision
Gene
Geography
Chemistry
Agronomy
Biology
Biochemistry
Geodesy
Authors
Lun Chen, Lu Zhao, Calvin Yu-Chian Chen
Abstract
Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks (DNNs) are susceptible to small adversarial perturbations of the input image, which raises safety concerns about deploying these systems in clinical settings. Our analysis attributes the vulnerability of existing medical image DNN models to two factors: the complex biological textures of medical images and the over-parameterization of the models. Based on this analysis, and to improve the defense of medical imaging systems against adversarial examples, we propose a new model-based defense framework that equips medical image DNN models with a pruning module and an attention mechanism module. Experiments on three benchmark medical image datasets verify that our method improves the robustness of medical image DNN models. On the chest X-ray dataset, our defense achieves a defense rate of up to 77.18% against the projected gradient descent (PGD) attack and 69.49% against the DeepFool attack. Ablation experiments on the pruning module and the attention mechanism module further confirm that both components contribute to the robustness of the model. Compared with existing model-based defense methods designed for natural images, our method is better suited to medical images. It can serve as a general strategy for designing more explainable and secure medical deep learning systems, and it can be applied across a wide range of medical image tasks to improve the robustness of medical models.
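The abstract names the ingredients of the defense (a pruning module, an attention mechanism module, and evaluation against PGD and DeepFool attacks) but gives no implementation details. The following is a minimal PyTorch sketch of how such a pipeline could look. The small CNN layout, the SE-style channel-attention block, the 30% L1 pruning amount, and the PGD hyperparameters are all illustrative assumptions, not the authors' actual configuration.

# A minimal sketch (not the authors' code): magnitude pruning plus a
# channel-attention block on a small CNN, evaluated under a PGD attack.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention that reweights feature channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        w = F.adaptive_avg_pool2d(x, 1).flatten(1)        # (N, C) channel summary
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(w))))  # per-channel weights
        return x * w.view(w.size(0), w.size(1), 1, 1)

class SmallDefendedCNN(nn.Module):
    """Toy grayscale classifier with an attention module before the head."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.attn = ChannelAttention(32)
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = self.attn(x)                                  # attention mechanism module
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.head(x)

def prune_model(model, amount=0.3):
    """L1 magnitude pruning on conv layers to reduce over-parameterization."""
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            prune.l1_unstructured(m, name="weight", amount=amount)
    return model

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD, used here only to evaluate the defended model."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()      # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)     # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                         # stay in valid pixel range
    return x_adv.detach()

# Toy usage on random data: defense rate = accuracy retained under attack.
model = prune_model(SmallDefendedCNN()).eval()
x, y = torch.rand(4, 1, 64, 64), torch.randint(0, 2, (4,))
x_adv = pgd_attack(model, x, y)
defense_rate = (model(x_adv).argmax(1) == y).float().mean().item()
print(f"defense rate on this batch: {defense_rate:.2%}")

In this sketch the "defense rate" is simply accuracy on adversarial inputs, one plausible reading of the metric reported in the abstract; how the paper defines it, and how pruning interacts with adversarial training there, is not specified in this excerpt.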