Speedup
Computer science
Spacetime
Masking (illustration)
Autoencoder
Artificial intelligence
Redundancy (engineering)
Representation (politics)
Feature learning
Pixel
Pattern recognition (psychology)
Machine learning
Algorithm
Deep learning
Art
Physics
Quantum mechanics
Politics
Political science
Law
Visual arts
Operating system
Authors
Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, Kaiming He
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 154
Identifier
DOI: 10.48550/arxiv.2205.09113
Abstract
This paper studies a conceptually simple extension of Masked Autoencoders (MAE) to spatiotemporal representation learning from videos. We randomly mask out spacetime patches in videos and learn an autoencoder to reconstruct them in pixels. Interestingly, we show that our MAE method can learn strong representations with almost no inductive bias on spacetime (except for patch and positional embeddings), and that spacetime-agnostic random masking performs best. We observe that the optimal masking ratio is as high as 90% (vs. 75% on images), supporting the hypothesis that this ratio is related to the information redundancy of the data. A high masking ratio leads to a large speedup, e.g., >4x in wall-clock time or more. We report competitive results on several challenging video datasets using vanilla Vision Transformers. We observe that MAE can outperform supervised pre-training by large margins. We further report encouraging results of training on real-world, uncurated Instagram data. Our study suggests that the general framework of masked autoencoding (BERT, MAE, etc.) can be a unified methodology for representation learning with minimal domain knowledge.
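The "spacetime-agnostic random masking" the abstract describes can be sketched as follows: flatten all spacetime patches of a video into a single index set and randomly keep only a small fraction (10% at the 90% masking ratio), with no structure imposed across space or time. This is a minimal illustrative sketch, not the authors' implementation; the function name, patch-grid shape (8 temporal x 14x14 spatial), and seed are hypothetical.

```python
import numpy as np

def random_spacetime_mask(num_patches, mask_ratio=0.9, seed=0):
    """Spacetime-agnostic random masking (illustrative sketch).

    Treats all spacetime patches as one flat set: a random subset is
    kept (fed to the encoder), the rest is masked (reconstructed by
    the decoder). No spatial or temporal structure is assumed.
    """
    rng = np.random.default_rng(seed)
    num_keep = int(num_patches * (1 - mask_ratio))
    perm = rng.permutation(num_patches)       # random order over all patches
    keep_idx = np.sort(perm[:num_keep])       # indices of visible patches
    mask = np.ones(num_patches, dtype=bool)   # True = masked
    mask[keep_idx] = False                    # False = visible
    return keep_idx, mask

# Hypothetical patch grid: 8 temporal x 14x14 spatial = 1568 spacetime patches.
keep_idx, mask = random_spacetime_mask(8 * 14 * 14, mask_ratio=0.9)
```

At a 90% ratio only ~10% of patches reach the encoder, which is the source of the >4x wall-clock speedup the abstract reports: encoder cost scales with the number of visible tokens.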