Artificial intelligence
Computer science
Feature (linguistics)
Encoder
Feature learning
Segmentation
Pattern recognition (psychology)
Representation (politics)
Autoencoder
Object (grammar)
Object detection
Computer vision
Natural language processing
Deep learning
Linguistics
Philosophy
Politics
Political science
Law
Operating system
Authors
Zhicheng Huang, Chin-Pi Lu, Qibin Hou, Ming-Ming Cheng, Dongmei Fu, Xiaohui Shen, Jiashi Feng
Identifier
DOI:10.1109/tpami.2023.3336525
Abstract
Masked image modeling (MIM) has achieved promising results on various vision tasks. However, the limited discriminability of the learned representations suggests there is still considerable room for building a stronger vision learner. Toward this goal, we propose Contrastive Masked Autoencoders (CMAE), a new self-supervised pre-training method for learning more comprehensive and capable vision representations. By carefully unifying contrastive learning (CL) and masked image modeling (MIM) through novel designs, CMAE leverages their respective advantages and learns representations with both strong instance discriminability and local perceptibility. Specifically, CMAE consists of two branches: the online branch is an asymmetric encoder-decoder, and the momentum branch is a momentum-updated encoder. During training, the online encoder reconstructs original images from latent representations of masked images to learn holistic features. The momentum encoder, fed with the full images, enhances feature discriminability via contrastive learning with its online counterpart. To make CL compatible with MIM, CMAE introduces two new components: pixel shifting for generating plausible positive views, and a feature decoder for complementing the features of contrastive pairs. Thanks to these novel designs, CMAE effectively improves representation quality and transfer performance over its MIM counterpart. CMAE achieves state-of-the-art performance on highly competitive benchmarks for image classification, semantic segmentation and object detection. Notably, CMAE-Base achieves 85.3% top-1 accuracy on ImageNet and 52.5% mIoU on ADE20K, surpassing the previous best results by 0.7% and 1.8% respectively.
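The abstract describes three mechanisms that couple the online and momentum branches: an EMA update of the momentum encoder, pixel shifting to generate a positive view, and a contrastive loss between the two branches' features. The sketch below illustrates these three pieces in isolation with NumPy; the function names, the shift range, and the use of an InfoNCE-style loss are illustrative assumptions, not the paper's exact implementation (which operates on ViT patch embeddings through a feature decoder).

```python
import numpy as np

def momentum_update(online_params, momentum_params, m=0.996):
    """EMA update: the momentum encoder slowly tracks the online encoder."""
    return {k: m * momentum_params[k] + (1 - m) * online_params[k]
            for k in online_params}

def pixel_shift(img, max_shift=4, rng=None):
    """Crop a slightly shifted view of the image (a simplified stand-in
    for CMAE's pixel shifting, which yields plausible positive views)."""
    rng = rng or np.random.default_rng(0)
    dy, dx = rng.integers(0, max_shift + 1, size=2)
    H, W = img.shape[:2]
    return img[dy:H - max_shift + dy, dx:W - max_shift + dx]

def info_nce(online_feats, momentum_feats, tau=0.07):
    """InfoNCE-style contrastive loss between online and momentum features.
    Row i of each matrix is one image's embedding; matching rows are the
    positive pair, all other rows serve as negatives."""
    q = online_feats / np.linalg.norm(online_feats, axis=1, keepdims=True)
    k = momentum_feats / np.linalg.norm(momentum_feats, axis=1, keepdims=True)
    logits = q @ k.T / tau                       # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal
```

In a full training step, the reconstruction loss from the masked online branch would be added to this contrastive term, and `momentum_update` would be applied after each optimizer step rather than by gradient descent.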