Robustness (evolution)
Perspective (graphical)
Computer science
Computer vision
Fourier transform
Artificial intelligence
Computer graphics (images)
Mathematics
Mathematical analysis
Biochemistry
Gene
Chemistry
Authors
Dong Yin,Raphael Gontijo Lopes,Jonathon Shlens,Ekin D. Cubuk,Justin Gilmer
Source
Venue: Cornell University - arXiv
Date: 2019-01-01
Citations: 230
Identifiers
DOI: 10.48550/arXiv.1906.08988
Abstract
Achieving robustness to distributional shift is a long-standing and challenging goal of computer vision. Data augmentation is a commonly used approach for improving robustness; however, robustness gains are typically not uniform across corruption types. Indeed, improving performance in the presence of random noise often comes at the cost of reduced performance on other corruptions such as contrast change. Understanding when and why these trade-offs occur is a crucial step towards mitigating them. To this end, we investigate recently observed trade-offs caused by Gaussian data augmentation and adversarial training. We find that both methods improve robustness to corruptions that are concentrated in the high-frequency domain while reducing robustness to corruptions that are concentrated in the low-frequency domain. This suggests that one way to mitigate these trade-offs via data augmentation is to use a more diverse set of augmentations. Accordingly, we observe that AutoAugment, a recently proposed data augmentation policy optimized for clean accuracy, achieves state-of-the-art robustness on the CIFAR-10-C benchmark.
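A minimal NumPy sketch of the frequency analysis the abstract describes (this is not code from the paper; `fourier_energy`, `gaussian_augment`, the noise scale `sigma=0.1`, the contrast factor, and the 8×8 low-frequency window are all illustrative assumptions):

```python
import numpy as np

def fourier_energy(delta):
    """Centered 2D Fourier magnitude of a corruption delta
    (corrupted image minus clean image), averaged over channels."""
    spectrum = np.fft.fftshift(np.fft.fft2(delta, axes=(0, 1)), axes=(0, 1))
    return np.abs(spectrum).mean(axis=-1)

def gaussian_augment(image, sigma, rng):
    """Gaussian data augmentation: add i.i.d. noise and clip to [0, 1].
    i.i.d. noise has a flat spectrum, so relative to natural-image
    statistics it perturbs mostly high frequencies."""
    return np.clip(image + rng.normal(0.0, sigma, size=image.shape), 0.0, 1.0)

rng = np.random.default_rng(0)

# Smooth synthetic stand-in for a natural image: like natural images,
# its spectral energy is concentrated at low frequencies.
x = np.linspace(0.0, 1.0, 32)
base = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
clean = np.stack([base, 0.8 * base, 0.6 * base], axis=-1)

# Two corruptions from opposite ends of the spectrum:
noise_delta = gaussian_augment(clean, 0.1, rng) - clean                  # high-frequency
contrast_delta = (0.5 * (clean - clean.mean()) + clean.mean()) - clean  # low-frequency

for name, delta in [("gaussian noise", noise_delta), ("contrast change", contrast_delta)]:
    energy = fourier_energy(delta)
    h, w = energy.shape
    # Fraction of spectral energy in the central 8x8 (lowest-frequency) block.
    low = energy[h // 2 - 4 : h // 2 + 4, w // 2 - 4 : w // 2 + 4].sum()
    print(f"{name}: low-frequency energy share = {low / energy.sum():.2f}")
```

On the smooth image, the contrast-change delta (which is proportional to the image itself) concentrates nearly all of its spectral energy in the central low-frequency block, while the Gaussian-noise delta spreads its energy uniformly (roughly 64/1024 ≈ 0.06 lands in the block). This mirrors the high- vs. low-frequency distinction between corruption types that the abstract draws.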