Authors
Shoufa Chen, Enze Xie, Chongjian Ge, Runjian Chen, Ding Liang, Ping Luo
Identifier
DOI: 10.1109/tpami.2023.3303397
Abstract
This article presents a simple yet effective multilayer perceptron (MLP) architecture, namely CycleMLP, which is a versatile neural backbone network capable of solving various dense visual prediction tasks such as object detection, segmentation, and human pose estimation. Compared to recent advanced MLP architectures such as MLP-Mixer (Tolstikhin et al. 2021), ResMLP (Touvron et al. 2021), and gMLP (Liu et al. 2021), whose architectures are sensitive to image size and are therefore infeasible for dense prediction tasks, CycleMLP has three appealing advantages: 1) CycleMLP can cope with various spatial sizes of images; 2) CycleMLP achieves linear computational complexity with respect to the image size by using local windows, whereas previous MLPs have $O(N^{2})$ computational complexity due to their full connections in space; 3) the relationship between convolution, multi-head self-attention in Transformer, and CycleMLP is discussed through an intuitive theoretical analysis. We build a family of models that surpass state-of-the-art MLP and Transformer models, e.g., Swin Transformer (Liu et al. 2021), while using fewer parameters and FLOPs. CycleMLP expands the applicability of MLP-like models, making them versatile backbone networks that achieve competitive results on dense prediction tasks. For example, CycleMLP-Tiny outperforms Swin-Tiny by 1.3% mIoU on the ADE20K dataset with fewer FLOPs. Moreover, CycleMLP also shows excellent zero-shot robustness on the ImageNet-C dataset.
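The core mechanism behind the two complexity claims in the abstract can be illustrated with a short sketch: instead of fully connecting all N spatial positions (cost $O(N^{2})$), each channel reads its input at a spatial offset that cycles through a small local window, after which an ordinary pointwise linear layer mixes channels, giving cost linear in N with no weight tied to the image size. The PyTorch code below is a minimal sketch under these assumptions, not the authors' implementation: the released CycleMLP realizes the cycling via deformable-convolution-style sampling along both axes with zero padding, whereas this sketch cycles along one axis with circular `torch.roll` shifts for brevity; the class name `CycleFC` and the offset pattern are illustrative.

```python
import torch
import torch.nn as nn


class CycleFC(nn.Module):
    """Minimal single-axis sketch of a cycle fully-connected layer.

    Channel c is read at a spatial offset that cycles through a local
    window of size `kernel_size` as c grows, then a pointwise linear
    layer mixes channels. Cost is linear in the number of pixels, and
    no parameter depends on H or W, so any input resolution works.
    """

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # channel mixing, shared across space
        half = kernel_size // 2
        # Offset pattern cycles, e.g. -1, 0, +1, -1, 0, +1, ... over channels.
        self.offsets = [(c % kernel_size) - half for c in range(dim)]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) feature map with arbitrary H and W.
        shifted = torch.stack(
            [
                # Circular shift along H approximates the paper's sampling;
                # the released code gathers with zero padding instead.
                torch.roll(x[..., c], self.offsets[c], dims=1)
                for c in range(x.shape[-1])
            ],
            dim=-1,
        )
        return self.proj(shifted)


layer = CycleFC(dim=8)
out = layer(torch.randn(2, 14, 14, 8))  # any spatial size is accepted
print(out.shape)  # torch.Size([2, 14, 14, 8])
```

Setting all offsets to zero recovers a plain channel MLP (a 1x1 convolution), which is the sense in which CycleMLP sits between pointwise convolution and spatially fully-connected MLPs discussed in the paper's theoretical analysis.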