Pooling
Computer science
Artificial intelligence
Contextual image classification
Pattern recognition (psychology)
Object detection
Residual
Transformer
Architecture
Machine learning
Computer vision
Image (mathematics)
Algorithm
Engineering
Art
Visual arts
Voltage
Electrical engineering
Authors
Yanghao Li,Chao-Yuan Wu,Haoqi Fan,Karttikeya Mangalam,Bo Xiong,Jitendra Malik,Christoph Feichtenhofer
Source
Journal: Cornell University - arXiv
Date: 2021-12
Cited by: 3
Identifier
DOI:10.48550/arxiv.2112.01526
Abstract
In this paper, we study Multiscale Vision Transformers (MViTv2) as a unified architecture for image and video classification, as well as object detection. We present an improved version of MViT that incorporates decomposed relative positional embeddings and residual pooling connections. We instantiate this architecture in five sizes and evaluate it for ImageNet classification, COCO detection and Kinetics video recognition where it outperforms prior work. We further compare MViTv2s' pooling attention to window attention mechanisms where it outperforms the latter in accuracy/compute. Without bells-and-whistles, MViTv2 has state-of-the-art performance in 3 domains: 88.8% accuracy on ImageNet classification, 58.7 boxAP on COCO object detection as well as 86.1% on Kinetics-400 video classification. Code and models are available at https://github.com/facebookresearch/mvit.
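To make the abstract's two architectural changes concrete, below is a minimal single-head PyTorch sketch of pooling attention with the residual pooling connection: queries, keys, and values are spatially downsampled before attention, and the pooled query is added back to the attention output. The class name, pooling operator, and strides are illustrative assumptions, not the authors' implementation (the official code is at the linked repository), and the decomposed relative positional embeddings, axis-wise biases added to the attention logits, are omitted for brevity.

```python
import torch
import torch.nn as nn

class PoolingAttention(nn.Module):
    """Sketch of MViTv2-style pooling attention (single head, 2D token grid).

    Q/K/V are downsampled with strided pooling before attention, and the
    pooled query is added to the attention output (the paper's residual
    pooling connection). Layer choices here are illustrative, not the
    authors' exact configuration.
    """

    def __init__(self, dim, q_stride=1, kv_stride=2):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.q_pool = nn.MaxPool2d(q_stride) if q_stride > 1 else nn.Identity()
        self.kv_pool = nn.MaxPool2d(kv_stride) if kv_stride > 1 else nn.Identity()
        self.scale = dim ** -0.5

    def forward(self, x):                       # x: (B, H, W, C)
        B, H, W, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def pool(t, op):
            t = op(t.permute(0, 3, 1, 2))       # pool spatially: (B, C, H', W')
            hp, wp = t.shape[-2:]
            return t.flatten(2).transpose(1, 2), (hp, wp)  # (B, H'*W', C)

        q, (hq, wq) = pool(q, self.q_pool)
        k, _ = pool(k, self.kv_pool)
        v, _ = pool(v, self.kv_pool)

        attn = (q @ k.transpose(-2, -1)) * self.scale      # (B, Nq, Nkv)
        out = attn.softmax(dim=-1) @ v                     # (B, Nq, C)
        out = out + q                                      # residual pooling connection
        return self.proj(out).reshape(B, hq, wq, C)

# Usage: with q_stride=1 the output keeps the input resolution while
# keys/values attend at a coarser scale, reducing attention cost.
y = PoolingAttention(dim=96)(torch.randn(2, 14, 14, 96))   # -> (2, 14, 14, 96)
```

Pooling K/V shrinks the attention matrix by the square of the stride, which is the accuracy/compute trade-off the abstract compares against window attention.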