Keywords
Spectrogram, Computer science, Encoder, Speech recognition, Decoding, Artificial intelligence, Masking (illustration), Transformer, Decoding methods, Pattern recognition (psychology), Algorithm, Art, Physics, Quantum mechanics, Voltage, Visual arts, Operating system
Authors
Po-Yao Huang, Hu Xu, Juncheng Li, Alexei Baevski, Michael Auli, Wojciech Galuba, Florian Metze, Christoph Feichtenhofer
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 90
Identifier
DOI: 10.48550/arxiv.2207.06405
Abstract
This paper studies a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens, in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks, outperforming other recent models that use external supervised pre-training. The code and models will be at https://github.com/facebookresearch/AudioMAE.
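To make the masking, encoding, and decoding flow described in the abstract concrete, below is a minimal PyTorch sketch of an MAE-style pipeline over spectrogram patches. It is not the released Audio-MAE implementation (see the GitHub link above): the module names, embedding dimensions, the 0.8 masking ratio, and the use of plain global attention in the decoder (instead of the paper's local window attention) are illustrative assumptions, and positional embeddings and the reconstruction loss are omitted for brevity.

```python
# Minimal sketch of MAE-style pre-training on spectrogram patches (assumed PyTorch code,
# not the official Audio-MAE release).
import torch
import torch.nn as nn


def random_masking(x, mask_ratio):
    """Keep a random subset of patch tokens; return kept tokens and restore indices.

    x: (batch, num_patches, dim) patch embeddings of the spectrogram.
    """
    b, n, d = x.shape
    n_keep = int(n * (1.0 - mask_ratio))
    noise = torch.rand(b, n, device=x.device)        # per-patch random scores
    ids_shuffle = torch.argsort(noise, dim=1)        # random permutation of patches
    ids_restore = torch.argsort(ids_shuffle, dim=1)  # inverse permutation
    ids_keep = ids_shuffle[:, :n_keep]
    x_visible = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    return x_visible, ids_restore


class TinyAudioMAE(nn.Module):
    """Toy encoder-decoder: encode only visible patches, decode the full sequence."""

    def __init__(self, dim=192, dec_dim=128, patch_pixels=256):
        super().__init__()
        self.patch_embed = nn.Linear(patch_pixels, dim)  # stand-in for a conv patchifier
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.enc_to_dec = nn.Linear(dim, dec_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        # Global attention here; the paper instead uses local window attention in the decoder.
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dec_dim, nhead=4, batch_first=True), num_layers=2)
        self.head = nn.Linear(dec_dim, patch_pixels)     # reconstruct patch pixels

    def forward(self, patches, mask_ratio=0.8):
        x = self.patch_embed(patches)                    # (B, N, dim)
        x_vis, ids_restore = random_masking(x, mask_ratio)
        latent = self.encoder(x_vis)                     # encoder sees visible tokens only
        # Decoder: append mask tokens, then re-order back to the original patch layout.
        y = self.enc_to_dec(latent)
        b, n_vis, d = y.shape
        n = ids_restore.shape[1]
        mask_tokens = self.mask_token.expand(b, n - n_vis, d)
        y_full = torch.cat([y, mask_tokens], dim=1)
        y_full = torch.gather(y_full, 1, ids_restore.unsqueeze(-1).expand(-1, -1, d))
        return self.head(self.decoder(y_full))           # predicted spectrogram patches


# Usage on dummy data: 512 patches of 16x16 = 256 values each.
model = TinyAudioMAE()
dummy = torch.randn(2, 512, 256)
recon = model(dummy, mask_ratio=0.8)
print(recon.shape)  # torch.Size([2, 512, 256])
```

For the fine-tuning stage mentioned in the abstract, a lower masking ratio would simply drop fewer patches before the encoder, and the decoder would be replaced by a classification head on the encoded tokens.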