Authors
Meng-Hao Guo, Cheng-Ze Lu, Qibin Hou, Zhengning Liu, Ming-Ming Cheng, Shi-Min Hu
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Cited by: 151
Identifier
DOI: 10.48550/arXiv.2209.08575
Abstract
We present SegNeXt, a simple convolutional network architecture for semantic segmentation. Recent transformer-based models have dominated the field of semantic segmentation due to the efficiency of self-attention in encoding spatial information. In this paper, we show that convolutional attention is a more efficient and effective way to encode contextual information than the self-attention mechanism in transformers. By re-examining the characteristics of successful segmentation models, we identify several key components responsible for their performance gains. This motivates us to design a novel convolutional attention network that uses cheap convolutional operations. Without bells and whistles, our SegNeXt significantly improves on previous state-of-the-art methods on popular benchmarks, including ADE20K, Cityscapes, COCO-Stuff, Pascal VOC, Pascal Context, and iSAID. Notably, SegNeXt outperforms EfficientNet-L2 w/ NAS-FPN and achieves 90.6% mIoU on the Pascal VOC 2012 test leaderboard using only 1/10 of its parameters. On average, SegNeXt achieves about a 2.0% mIoU improvement over state-of-the-art methods on the ADE20K dataset with the same or fewer computations. Code is available at https://github.com/uyzhang/JSeg (Jittor) and https://github.com/Visual-Attention-Network/SegNeXt (PyTorch).
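The abstract's central claim is that spatial attention can be computed with cheap (depth-wise) convolutions instead of self-attention. The following is a minimal PyTorch sketch of that idea, assuming a multi-scale depth-wise convolutional attention block; the kernel sizes, branch count, and class name here are illustrative assumptions, not the authors' exact module (see the linked repositories for the reference code).

# Minimal sketch of convolutional attention in the spirit of SegNeXt's encoder.
# Assumed design: local depth-wise conv + multi-scale strip convolutions,
# mixed by a 1x1 conv and applied as an element-wise re-weighting of the input.
import torch
import torch.nn as nn


class ConvAttention(nn.Module):
    """Spatial attention built only from cheap depth-wise convolutions (sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        # Local context via a depth-wise 5x5 convolution.
        self.local = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        # Multi-scale context via pairs of depth-wise strip convolutions
        # (1xk followed by kx1); the scales below are assumed, not prescribed.
        self.branches = nn.ModuleList()
        for k in (7, 11, 21):
            self.branches.append(nn.Sequential(
                nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels),
                nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels),
            ))
        # A 1x1 convolution mixes channels and produces the attention map.
        self.mix = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ctx = self.local(x)
        ctx = ctx + sum(branch(ctx) for branch in self.branches)
        attn = self.mix(ctx)
        # The attention map re-weights the input element-wise, so no softmax
        # over all spatial positions (as in self-attention) is required.
        return attn * x


if __name__ == "__main__":
    block = ConvAttention(channels=64)
    feats = torch.randn(1, 64, 32, 32)  # (batch, channels, height, width)
    print(block(feats).shape)           # torch.Size([1, 64, 32, 32])

Because every operation above is a depth-wise or 1x1 convolution, its cost grows linearly with the number of pixels, which is the efficiency argument the abstract makes against quadratic self-attention.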