Keywords
Computer science; Robustness; Convolutional neural network; Artificial intelligence; Segmentation; Adversarial training; Image segmentation; Machine learning; Benchmark; Domain; Pattern recognition; Data mining
Authors
Chen Chen, Zeju Li, Cheng Ouyang, Matthew Sinclair, Wenjia Bai, Daniel Rueckert
Identifier
DOI: 10.1007/978-3-031-16443-9_15
Abstract
Convolutional neural networks (CNNs) have achieved remarkable segmentation accuracy on benchmark datasets where training and test sets are from the same domain, yet their performance can degrade significantly on unseen domains, which hinders the deployment of CNNs in many clinical scenarios. Most existing works improve model out-of-domain (OOD) robustness by collecting multi-domain datasets for training, which is expensive and may not always be feasible due to privacy and logistical issues. In this work, we focus on improving model robustness using a single-domain dataset only. We propose a novel data augmentation framework called MaxStyle, which maximizes the effectiveness of style augmentation for model OOD performance. It attaches an auxiliary style-augmented image decoder to a segmentation network for robust feature learning and data augmentation. Importantly, MaxStyle augments data with improved image style diversity and hardness by expanding the style space with noise and searching for the worst-case style composition of latent features via adversarial training. With extensive experiments on multiple public cardiac and prostate MR datasets, we demonstrate that MaxStyle leads to significantly improved out-of-distribution robustness against unseen corruptions as well as common distribution shifts across multiple unseen sites and unknown image sequences, in both low- and high-data training settings. Code is available at https://github.com/cherise215/MaxStyle.
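To make the two mechanisms named in the abstract more concrete (expanding the style space with noise, and adversarially searching for a worst-case style composition), below is a minimal PyTorch sketch of the general idea only, not the authors' implementation; in the actual method, style augmentation is applied inside an auxiliary image decoder, and the reference code lives in the linked repository. All names here (style_stats, StyleAugment, adversarial_style_step, the one-convolution segmentation head) are hypothetical, and the MixStyle-like mixing of batch feature statistics stands in for the paper's style-composition step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def style_stats(x, eps=1e-6):
    """Channel-wise mean/std over spatial dims; x has shape (B, C, H, W)."""
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mu, sigma


class StyleAugment(nn.Module):
    """Hypothetical MaxStyle-like layer: re-styles features with a mixture
    of shuffled batch statistics plus additive style noise. The noise and
    mixing weight are nn.Parameters so that gradient ascent on the task
    loss can search for a harder style (adversarial style search)."""

    def __init__(self, batch_size, channels):
        super().__init__()
        # additive noise on the style statistics: expands the style space
        self.noise_mu = nn.Parameter(torch.zeros(batch_size, channels, 1, 1))
        self.noise_sigma = nn.Parameter(torch.zeros(batch_size, channels, 1, 1))
        # unconstrained mixing logit; sigmoid() maps it into (0, 1)
        self.mix_logit = nn.Parameter(torch.zeros(batch_size, 1, 1, 1))
        self.perm = None

    def forward(self, x):
        if self.perm is None:  # fix the mixing partner across ascent steps
            self.perm = torch.randperm(x.size(0), device=x.device)
        mu, sigma = style_stats(x)
        lam = torch.sigmoid(self.mix_logit)
        # interpolate between own and shuffled styles, then add noise
        mu_aug = lam * mu + (1 - lam) * mu[self.perm] + self.noise_mu
        sigma_aug = lam * sigma + (1 - lam) * sigma[self.perm] + self.noise_sigma
        # normalize with the original style, re-style with the augmented one
        return sigma_aug * (x - mu) / sigma + mu_aug


def adversarial_style_step(aug, feats, head, target, lr=0.1, steps=2):
    """Gradient-ascent search over style parameters only: increase the
    segmentation loss to find a worst-case style for these features."""
    feats = feats.detach()  # the encoder is frozen during the style search
    opt = torch.optim.SGD(aug.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        head.zero_grad()
        loss = F.cross_entropy(head(aug(feats)), target)
        (-loss).backward()  # ascend on the task loss
        opt.step()
    # in real training the layer sits inside the network, so the model is
    # then updated to *minimize* the loss on these hard-styled features
    return aug(feats).detach()


# toy usage: 4-sample batch, 16-channel features, 3-class segmentation head
feats = torch.randn(4, 16, 32, 32)
target = torch.randint(0, 3, (4, 32, 32))
head = nn.Conv2d(16, 3, kernel_size=1)
aug = StyleAugment(batch_size=4, channels=16)
hard_feats = adversarial_style_step(aug, feats, head, target)
```

The design point the sketch illustrates is the min-max structure: only the style parameters are updated by gradient ascent, so the loss is maximized over styles while the segmentation network is subsequently trained to minimize it on the resulting harder examples.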