Generalization
Computer science
Bijection
Sample (material)
Artificial intelligence
Theoretical computer science
Algorithm
Mathematics
Discrete mathematics
Chromatography
Mathematical analysis
Chemistry
Authors
Sharon Zhou, Jiequan Zhang, Hang Jiang, Torbjörn Lundh, Andrew Y. Ng
Identifiers
DOI: 10.1088/2632-2153/abd615
Abstract
Data augmentation has led to substantial improvements in the performance and generalization of deep models, and remains a highly adaptable method to evolving model architectures and varying amounts of data—in particular, extremely scarce amounts of available training data. In this paper, we present a novel method of applying Möbius transformations to augment input images during training. Möbius transformations are bijective conformal maps that generalize image translation to operate over complex inversion in pixel space. As a result, Möbius transformations can operate on the sample level and preserve data labels. We show that the inclusion of Möbius transformations during training enables improved generalization over prior sample-level data augmentation techniques such as cutout and standard crop-and-flip transformations, most notably in low data regimes.
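The abstract describes warping images with Möbius transformations, i.e. maps of the form f(z) = (az + b)/(cz + d) with ad − bc ≠ 0 applied to pixel coordinates viewed as complex numbers. The sketch below is not the authors' implementation; it is a minimal illustration of the idea using backward warping with nearest-neighbour sampling, and the function name `mobius_augment` and the parameter values in the example are assumptions chosen for demonstration.

```python
import numpy as np

def mobius_augment(image, a, b, c, d):
    """Warp an H x W (x C) image with the Möbius map f(z) = (az + b)/(cz + d).

    Backward warping: each output pixel z samples the input at the preimage
    f^{-1}(z) = (d*z - b)/(-c*z + a), using nearest-neighbour lookup and
    leaving out-of-range pixels black. Labels are untouched, so the transform
    acts purely on the sample level.
    """
    assert abs(a * d - b * c) > 1e-8, "map must be invertible (ad - bc != 0)"
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    z = xs + 1j * ys                      # output pixel grid as complex numbers
    src = (d * z - b) / (-c * z + a)      # inverse Möbius map
    sx = np.rint(src.real).astype(int)    # nearest source column
    sy = np.rint(src.imag).astype(int)    # nearest source row
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(image)
    out[valid] = image[sy[valid], sx[valid]]
    return out

# Illustrative usage: a mild warp of a random "image" with hypothetical
# coefficients; a translation corresponds to a = d = 1, c = 0, b = shift.
img = np.random.rand(64, 64, 3)
aug = mobius_augment(img, a=1.0, b=8.0, c=0.0005j, d=1.0)
```

With c = 0 the map reduces to an affine shift-and-scale of the plane, which is why the abstract frames Möbius transformations as a generalization of image translation; nonzero c introduces the complex inversion.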