Keywords
Generalization
Computer science
Field (mathematical analysis)
Artificial intelligence
Distribution (mathematics)
Superposition principle
Set (abstract data type)
Transfer learning
Artificial neural network
Dataset
Image (mathematics)
Pattern recognition (psychology)
Algorithm
Data mining
Mathematics
Mathematical analysis
Programming language
Authors
Fawu Wang, Ruizhe Li, Kang Zhang, Xia Yuan, Chunxia Zhao
Identifier
DOI:10.1109/mmsp55362.2022.9949199
Abstract
Modern deep neural networks suffer performance degradation when evaluated on test data drawn from a different distribution than the training data. Out-of-distribution generalization aims to solve this problem by learning transferable knowledge from source domains that generalizes to unseen target domains. This paper presents a data augmentation method for out-of-distribution generalization. The main assumption is that the main data distribution of an image mostly carries domain-related information, such as color, illumination, and texture, which is harmful under domain shift. To force the model to pay less attention to this information, we propose a new data augmentation method based on the main distribution transition. Extensive experiments on two datasets demonstrate that the proposed method achieves state-of-the-art performance for domain generalization. At the same time, our method can not only be combined with other methods to produce a compounding generalization effect, but also generates obfuscated data cheaply.
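The abstract does not specify how the "main distribution transition" is computed. As a minimal illustrative sketch only, assuming the domain-related statistics are per-channel mean and standard deviation (as in AdaIN-style stylization, a common choice in domain-generalization augmentation), one could transfer those statistics from a reference image to a content image like this; the function name and parameters are hypothetical, not the authors' implementation:

```python
import numpy as np

def transfer_channel_statistics(content, reference, eps=1e-6):
    """Illustrative sketch: re-normalize each channel of `content` (H, W, C)
    to match the per-channel mean/std of `reference`, so the augmented image
    keeps the content's structure but the reference's global statistics."""
    c_mean = content.mean(axis=(0, 1), keepdims=True)
    c_std = content.std(axis=(0, 1), keepdims=True)
    r_mean = reference.mean(axis=(0, 1), keepdims=True)
    r_std = reference.std(axis=(0, 1), keepdims=True)
    # Whiten with the content statistics, then re-color with the reference's.
    return (content - c_mean) / (c_std + eps) * r_std + r_mean
```

Training on such recolored images discourages the network from relying on color and illumination statistics, which is the behavior the abstract attributes to the proposed augmentation.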