Authors
Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 8
Identifier
DOI: 10.48550/arxiv.2107.02053
Abstract
Neural networks do not generalize well to unseen data with domain shifts -- a longstanding problem in machine learning and AI. To overcome this problem, we propose MixStyle, a simple plug-and-play, parameter-free module that can improve domain generalization performance without the need to collect more data or increase model capacity. The design of MixStyle is simple: it mixes the feature statistics of two random instances in a single forward pass during training. The idea is grounded in a finding from recent style transfer research that feature statistics capture image style information, which essentially defines visual domains. Mixing feature statistics can therefore be seen as an efficient way to synthesize new domains in the feature space, thus achieving data augmentation. MixStyle is easy to implement with a few lines of code, does not require modification to training objectives, and can fit a variety of learning paradigms including supervised domain generalization, semi-supervised domain generalization, and unsupervised domain adaptation. Our experiments show that MixStyle can significantly boost out-of-distribution generalization performance across a wide range of tasks including image recognition, instance retrieval, and reinforcement learning.
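To make the mixing step concrete, below is a minimal PyTorch sketch of a module in the spirit of the abstract: channel-wise means and standard deviations of randomly paired instances in a mini-batch are interpolated with a Beta-distributed weight, and the features are re-styled with the mixed statistics. The class name, hyperparameter defaults (p, alpha, eps), and the choice to detach the statistics are assumptions for illustration, not taken verbatim from the paper.

```python
import torch
import torch.nn as nn


class MixStyleSketch(nn.Module):
    """Illustrative sketch of MixStyle-style feature-statistics mixing.

    Mixes per-instance, per-channel mean and std of two random
    instances in a batch during training, synthesizing new "styles"
    (i.e., domains) in feature space.
    """

    def __init__(self, p=0.5, alpha=0.1, eps=1e-6):
        super().__init__()
        self.p = p  # probability of applying the mixing (assumed default)
        self.beta = torch.distributions.Beta(alpha, alpha)  # mixing-weight prior
        self.eps = eps  # numerical stability for the std computation

    def forward(self, x):
        # x: (B, C, H, W) feature map from an intermediate CNN layer.
        # Only active during training, and only with probability p.
        if not self.training or torch.rand(1).item() > self.p:
            return x

        B = x.size(0)
        mu = x.mean(dim=[2, 3], keepdim=True)                 # per-instance channel means
        sig = (x.var(dim=[2, 3], keepdim=True) + self.eps).sqrt()
        mu, sig = mu.detach(), sig.detach()                   # stop gradients through stats (assumed)
        x_norm = (x - mu) / sig                               # style-normalized features

        lam = self.beta.sample((B, 1, 1, 1)).to(x.device)     # Beta(alpha, alpha) mixing weights
        perm = torch.randperm(B, device=x.device)             # random pairing within the batch
        mu_mix = lam * mu + (1 - lam) * mu[perm]              # interpolated means
        sig_mix = lam * sig + (1 - lam) * sig[perm]           # interpolated stds

        return x_norm * sig_mix + mu_mix                      # re-style with mixed statistics
```

Consistent with the abstract, the sketch has no learnable parameters and leaves the training objective untouched; it would simply be inserted between layers of an existing backbone (e.g., after early residual stages of a ResNet) and becomes a no-op at evaluation time.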