Robustness (evolution)
Computer science
Artificial intelligence
Robustness testing
Machine learning
Convolutional neural network
Data mining
Biochemistry
Chemistry
Gene
Fuzzy logic
Authors
Zhendong Liu, Shuwei Qian, Changhong Xia, Chongjun Wang
Source
Journal: Neural Networks
[Elsevier]
Date: 2024-04-01
Volume/Article no.: 172: 106091
Citations: 2
Identifier
DOI: 10.1016/j.neunet.2023.12.045
Abstract
As the deployment of artificial intelligence (AI) models in real-world settings grows, their robustness in open environments becomes increasingly critical. This study dissects the robustness of deep learning models, in particular comparing transformer-based models against CNN-based models. We focus on unraveling the sources of robustness from two key perspectives: structural and process robustness. Our findings suggest that transformer-based models generally outperform convolution-based models in robustness across multiple metrics. However, we contend that these metrics, such as the mean corruption error (mCE), may not wholly represent true model robustness. To better understand the underpinnings of this robustness advantage, we analyze models through the lens of the Fourier transform and game interaction. From these insights, we propose a calibrated evaluation metric for robustness against real-world data, and a blur-based method to enhance robustness performance. Our approach achieves state-of-the-art results, with mCE scores of 2.1% on CIFAR-10-C, 12.4% on CIFAR-100-C, and 24.9% on TinyImageNet-C.
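The mCE scores reported above follow the standard corruption-benchmark convention (per-corruption errors summed over severity levels, normalized by a baseline model, then averaged across corruption types). A minimal sketch of that computation is below; the error values are purely illustrative, not the paper's numbers:

```python
def mce(model_errors: dict, baseline_errors: dict) -> float:
    """Mean Corruption Error: for each corruption type, sum the model's
    classification error over severity levels 1-5, divide by the baseline
    model's summed error for that corruption, then average the resulting
    ratios across all corruption types."""
    ratios = []
    for corruption, errs in model_errors.items():
        ce = sum(errs) / sum(baseline_errors[corruption])
        ratios.append(ce)
    return sum(ratios) / len(ratios)

# Illustrative errors at severities 1..5 (hypothetical, for demonstration only)
model = {"gaussian_noise": [0.02, 0.03, 0.05, 0.08, 0.12],
         "motion_blur":    [0.01, 0.02, 0.04, 0.06, 0.09]}
baseline = {"gaussian_noise": [0.10, 0.15, 0.25, 0.40, 0.60],
            "motion_blur":    [0.08, 0.12, 0.20, 0.30, 0.45]}
print(round(mce(model, baseline), 3))  # → 0.196, i.e. an mCE of ~19.6%
```

Lower is better: an mCE below 1.0 (100%) means the model degrades less under corruption than the baseline does.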