Keywords
Spectrogram, Computer science, Transformer, Speech recognition, Electrical engineering, Engineering, Voltage
Authors
Alan Baade, Puyuan Peng, David Harwath
Identifiers
DOI: 10.21437/interspeech.2022-10961
Abstract
In this paper, we propose a simple yet powerful improvement over the recent Self-Supervised Audio Spectrogram Transformer (SSAST) model for speech and audio classification. Specifically, we leverage the insight that the SSAST uses a very high masking ratio (75%) during pretraining, meaning that the vast majority of self-attention compute is performed on mask tokens. We address this by integrating the encoder-decoder architecture from Masked Autoencoders Are Scalable Vision Learners (MAE) into the SSAST, where a deep encoder operates only on unmasked input, and a shallow decoder operates on encoder outputs and mask tokens. We find that MAE-like pretraining can provide a 3× speedup and a 2× reduction in memory usage over the vanilla SSAST using current audio pretraining strategies with ordinary model and input sizes. When finetuning on downstream tasks, which uses only the encoder, our approach outperforms the SSAST on a variety of tasks. We further conduct comprehensive evaluations of different pretraining strategies and explore differences in MAE-style pretraining between the visual and audio domains.
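The efficiency argument in the abstract hinges on one architectural detail: the deep encoder attends only over the ~25% of spectrogram patches that survive masking, while a shallow, narrower decoder sees the full sequence with learned mask tokens re-inserted at the masked positions. Below is a minimal PyTorch sketch of this MAE-style pretraining step. It is an illustration of the general technique, not the authors' released implementation; the class name MAEStylePretrainer and all hyperparameters (patch_dim, widths, layer counts) are assumed for the example.

```python
import torch
import torch.nn as nn


class MAEStylePretrainer(nn.Module):
    """Sketch of MAE-style masked pretraining on spectrogram patches.

    A deep encoder processes only the unmasked patches; a shallow decoder
    reconstructs the masked ones. Hyperparameters are illustrative.
    """

    def __init__(self, patch_dim=256, dim=768, enc_layers=12,
                 dec_dim=384, dec_layers=2, num_patches=512):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, enc_layers)
        # The decoder is shallower and narrower than the encoder, as in MAE.
        self.enc_to_dec = nn.Linear(dim, dec_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        self.dec_pos = nn.Parameter(torch.zeros(1, num_patches, dec_dim))
        dec_layer = nn.TransformerEncoderLayer(dec_dim, nhead=6, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, dec_layers)
        self.head = nn.Linear(dec_dim, patch_dim)  # reconstruct patch values

    def forward(self, patches, mask_ratio=0.75):
        B, N, _ = patches.shape
        x = self.embed(patches) + self.pos[:, :N]
        # Randomly keep (1 - mask_ratio) of the patches per example.
        n_keep = int(N * (1 - mask_ratio))
        perm = torch.rand(B, N, device=x.device).argsort(dim=1)
        keep = perm[:, :n_keep]
        x_vis = torch.gather(x, 1, keep.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        # Deep encoder sees only the unmasked ~25%; this is where the
        # claimed compute and memory savings come from.
        z = self.enc_to_dec(self.encoder(x_vis))
        # Re-insert mask tokens at the masked positions for the decoder.
        full = self.mask_token.expand(B, N, -1).clone()
        full.scatter_(1, keep.unsqueeze(-1).expand(-1, -1, z.size(-1)), z)
        pred = self.head(self.decoder(full + self.dec_pos[:, :N]))
        # Reconstruction loss is computed on the masked positions only.
        masked = torch.ones(B, N, dtype=torch.bool, device=x.device)
        masked.scatter_(1, keep, False)
        return ((pred - patches) ** 2)[masked].mean()
```

A quick usage check, with random patches standing in for a real spectrogram:

```python
model = MAEStylePretrainer()
spec_patches = torch.randn(2, 512, 256)  # (batch, num_patches, patch_dim)
loss = model(spec_patches)               # masking ratio defaults to 75%
loss.backward()
```

Note that at a 75% masking ratio the encoder's self-attention runs over a sequence one quarter the original length; since self-attention cost grows quadratically with sequence length, this is consistent with the order-of-magnitude speedup and memory savings the abstract reports. For downstream finetuning, only the encoder would be kept and the decoder discarded.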