Keywords
Computer science
Robustness (evolution)
Artificial intelligence
Quadratic growth
Algorithm
Filter (signal processing)
Perceptron
Frequency domain
Computational complexity theory
Pattern recognition (psychology)
Artificial neural network
Computer vision
Biochemistry
Chemistry
Gene
Authors
Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, Jie Zhou
Source
Journal: Cornell University - arXiv
Date: 2021-07-01
Citations: 187
Identifier
DOI: 10.48550/arXiv.2107.00645
Abstract
Recent advances in self-attention and pure multi-layer perceptron (MLP) models for vision have shown great potential in achieving promising performance with fewer inductive biases. These models are generally based on learning interaction among spatial locations from raw data. The complexity of self-attention and MLP grows quadratically as the image size increases, which makes these models hard to scale up when high-resolution features are required. In this paper, we present the Global Filter Network (GFNet), a conceptually simple yet computationally efficient architecture that learns long-term spatial dependencies in the frequency domain with log-linear complexity. Our architecture replaces the self-attention layer in vision transformers with three key operations: a 2D discrete Fourier transform, an element-wise multiplication between frequency-domain features and learnable global filters, and a 2D inverse Fourier transform. We exhibit favorable accuracy/complexity trade-offs of our models on both ImageNet and downstream tasks. Our results demonstrate that GFNet can be a very competitive alternative to transformer-style models and CNNs in efficiency, generalization ability, and robustness. Code is available at https://github.com/raoyongming/GFNet
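The three operations named in the abstract map directly onto standard FFT primitives: the 2D Fourier transform and its inverse provide the O(N log N) global spatial mixing, and the learnable filter is an element-wise product in the frequency domain. Below is a minimal sketch of such a layer, assuming a PyTorch-style implementation; the class name GlobalFilterLayer, the 14x14 token grid, and the 64-channel width are illustrative assumptions and not taken from the authors' released code (see the linked repository for the reference implementation).

```python
# Minimal sketch of the frequency-domain mixing step described in the abstract.
# Names and shapes are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class GlobalFilterLayer(nn.Module):
    """Replaces self-attention with: 2D FFT -> learnable global filter -> inverse 2D FFT."""

    def __init__(self, dim, h, w):
        super().__init__()
        # One learnable complex filter per channel over the (h, w//2 + 1) rFFT grid,
        # stored as real/imaginary pairs.
        self.weight = nn.Parameter(torch.randn(h, w // 2 + 1, dim, 2) * 0.02)

    def forward(self, x):
        # x: (batch, height, width, channels) spatial tokens
        b, h, w, c = x.shape
        x = torch.fft.rfft2(x, dim=(1, 2), norm="ortho")             # 2D discrete Fourier transform
        x = x * torch.view_as_complex(self.weight)                    # element-wise global filtering
        x = torch.fft.irfft2(x, s=(h, w), dim=(1, 2), norm="ortho")   # 2D inverse Fourier transform
        return x


if __name__ == "__main__":
    layer = GlobalFilterLayer(dim=64, h=14, w=14)
    tokens = torch.randn(2, 14, 14, 64)   # hypothetical 14x14 patch grid, 64 channels
    out = layer(tokens)
    print(out.shape)                      # torch.Size([2, 14, 14, 64])
```

Because the learnable filter spans the entire frequency grid, every output token depends on every input token, giving the same global receptive field as self-attention, while the FFT keeps the cost log-linear in the number of tokens rather than quadratic.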