Computer science
Artificial intelligence
Segmentation
Benchmark (surveying)
Convolutional neural network
Object detection
Pattern recognition (psychology)
Kernel (algebra)
Machine learning
Computer vision
Geodesy
Mathematics
Combinatorics
Geography
Authors
Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 7
Identifier
DOI: 10.48550/arxiv.2202.09741
Abstract
While originally designed for natural language processing tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel linear attention named large kernel attention (LKA) to enable self-adaptive and long-range correlations in self-attention while avoiding its shortcomings. Furthermore, we present a neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple, VAN surpasses similarly sized vision transformers (ViTs) and convolutional neural networks (CNNs) in various tasks, including image classification, object detection, semantic segmentation, panoptic segmentation, pose estimation, etc. For example, VAN-B6 achieves 87.8% accuracy on the ImageNet benchmark and sets a new state of the art (58.2 PQ) for panoptic segmentation. Moreover, VAN-B2 surpasses Swin-T by 4% mIoU (50.1 vs. 46.1) for semantic segmentation on the ADE20K benchmark and by 2.6% AP (48.8 vs. 46.2) for object detection on the COCO dataset. It provides a novel method and a simple yet strong baseline for the community. Code is available at https://github.com/Visual-Attention-Network.
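To make the LKA idea concrete, below is a minimal PyTorch sketch of a large-kernel-attention block as the paper describes it: a large-kernel convolution decomposed into a depth-wise convolution, a depth-wise dilated convolution, and a 1x1 point-wise convolution, whose output gates the input by element-wise multiplication. The specific kernel sizes (5, then 7 with dilation 3, approximating a 21x21 receptive field) follow the paper's configuration, but treat this as an illustrative sketch rather than the reference implementation from the linked repository.

```python
import torch
import torch.nn as nn

class LKA(nn.Module):
    """Large Kernel Attention: decomposes a large (~21x21) convolution into
    three cheap pieces, then uses the result as an attention map."""

    def __init__(self, dim: int):
        super().__init__()
        # 5x5 depth-wise conv captures local structure.
        self.dw_conv = nn.Conv2d(dim, dim, kernel_size=5, padding=2, groups=dim)
        # 7x7 depth-wise dilated conv (dilation 3) captures long-range context;
        # effective kernel is 1 + (7 - 1) * 3 = 19, so padding=9 preserves size.
        self.dw_dilated = nn.Conv2d(dim, dim, kernel_size=7, padding=9,
                                    groups=dim, dilation=3)
        # 1x1 conv mixes channels, providing the channel adaptability that
        # standard self-attention lacks.
        self.pw_conv = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw_conv(self.dw_dilated(self.dw_conv(x)))
        # Element-wise gating: cost is linear in the number of pixels, unlike
        # the quadratic cost of standard self-attention.
        return x * attn

# Usage: apply LKA to a batch of 64-channel feature maps.
x = torch.randn(2, 64, 56, 56)
out = LKA(64)(x)
assert out.shape == x.shape
```

Because every operation is a convolution or an element-wise product, the block keeps 2D structure intact and scales linearly with image resolution, which is exactly the trade-off the abstract claims over quadratic self-attention.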