Authors
Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, Han Hu
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 3
Identifier
DOI: 10.48550/arxiv.2111.09886
Abstract
This paper presents SimMIM, a simple framework for masked image modeling. We simplify recently proposed related approaches, dropping special designs such as block-wise masking and tokenization via discrete VAE or clustering. To study what makes the masked image modeling task learn good representations, we systematically examine the major components of our framework and find that simple designs for each component yield very strong representation learning performance: 1) random masking of the input image with a moderately large masked patch size (e.g., 32) makes a strong pretext task; 2) predicting raw RGB pixel values by direct regression performs no worse than patch classification approaches with complex designs; 3) the prediction head can be as light as a linear layer, with no worse performance than heavier ones. Using ViT-B, our approach achieves 83.8% top-1 fine-tuning accuracy on ImageNet-1K by pre-training on this same dataset, surpassing the previous best approach by +0.6%. When applied to a larger model of about 650 million parameters, SwinV2-H, it achieves 87.1% top-1 accuracy on ImageNet-1K using only ImageNet-1K data. We also leverage this approach to facilitate the training of a 3B model (SwinV2-G): using $40\times$ less data than in previous practice, we achieve the state of the art on four representative vision benchmarks. The code and models will be publicly available at https://github.com/microsoft/SimMIM.
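The abstract's three findings (random masking with a moderately large patch size, direct regression of raw RGB pixels, and a lightweight prediction head) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the image size (192), masking ratio (0.6), and function names here are illustrative assumptions; only the patch size of 32 and the pixel-regression objective come from the abstract.

```python
import numpy as np

def random_patch_mask(img_size=192, patch_size=32, mask_ratio=0.6, rng=None):
    """Random patch masking as in finding 1.

    Splits the image into (img_size // patch_size)^2 patches and marks a
    random subset as masked. The 0.6 ratio is an assumed hyperparameter,
    not stated in the abstract. Returns a boolean grid; True = masked.
    """
    rng = rng or np.random.default_rng(0)
    n = img_size // patch_size
    num_patches = n * n
    num_masked = int(round(mask_ratio * num_patches))
    idx = rng.permutation(num_patches)[:num_masked]
    mask = np.zeros(num_patches, dtype=bool)
    mask[idx] = True
    return mask.reshape(n, n)

def pixel_regression_loss(pred, target, mask, patch_size=32):
    """Direct regression of raw RGB values (finding 2), here as an L1
    loss averaged over masked pixels only; unmasked patches do not
    contribute to the objective.
    """
    # Upsample the patch-level mask to pixel resolution.
    pix_mask = np.repeat(np.repeat(mask, patch_size, axis=0),
                         patch_size, axis=1)
    diff = np.abs(pred - target)          # (H, W, 3) per-pixel error
    return diff[pix_mask].mean()          # average over masked pixels
```

Finding 3 corresponds to producing `pred` from encoder features with nothing heavier than a single linear projection per patch; the loss above is indifferent to how `pred` was produced, which is what makes the light head a drop-in choice.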