Computer Science
Audiovisual
Multimedia
Human-Computer Interaction
Natural Language Processing
Artificial Intelligence
Speech Recognition
Authors
Andrew Rouditchenko, Angie Boggust, David Harwath, Brian Chen, Dhiraj Joshi, Samuel Thomas, Kartik Audhkhasi, Hilde Kuehne, Rameswar Panda, Rogério Feris, Brian Kingsbury, Michael Picheny, Antonio Torralba, James Glass
Identifier
DOI: 10.21437/interspeech.2021-1312
Abstract
Current methods for learning visually grounded language from videos often rely on text annotation, such as human-generated captions or machine-generated automatic speech recognition (ASR) transcripts. In this work, we introduce the Audio-Video Language Network (AVLnet), a self-supervised network that learns a shared audio-visual embedding space directly from raw video inputs. To circumvent the need for text annotation, we learn audio-visual representations from randomly segmented video clips and their raw audio waveforms. We train AVLnet on HowTo100M, a large corpus of publicly available instructional videos, and evaluate on image retrieval and video retrieval tasks, achieving state-of-the-art performance. We perform analysis of AVLnet's learned representations, showing our model utilizes speech and natural sounds to learn audio-visual concepts. Further, we propose a trimodal model that jointly processes raw audio, video, and text captions from videos to learn a multi-modal semantic embedding space useful for text-video retrieval. Our code, data, and trained models will be released at avlnet.csail.mit.edu.
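To give a concrete picture of the shared audio-visual embedding space described in the abstract, the sketch below is a minimal, hypothetical illustration. It assumes simple MLP projection heads over pre-pooled audio and video clip features and a symmetric InfoNCE-style contrastive objective; the actual AVLnet encoders and training loss may differ, and all class names, dimensions, and functions here are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of a shared audio-visual embedding space trained with a
# symmetric InfoNCE-style contrastive loss. Encoder architectures, feature
# dimensions, and the loss itself are illustrative assumptions, not AVLnet's
# exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioVideoEmbedder(nn.Module):
    def __init__(self, audio_feat_dim=40, video_feat_dim=2048, embed_dim=512):
        super().__init__()
        # Audio branch: stand-in for a network over raw audio of a clip.
        self.audio_net = nn.Sequential(
            nn.Linear(audio_feat_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        # Video branch: stand-in for pooled visual features of the same clip.
        self.video_net = nn.Sequential(
            nn.Linear(video_feat_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, audio_feats, video_feats):
        # L2-normalize so similarity is a cosine dot product in one space.
        a = F.normalize(self.audio_net(audio_feats), dim=-1)
        v = F.normalize(self.video_net(video_feats), dim=-1)
        return a, v

def contrastive_loss(a, v, temperature=0.07):
    """Symmetric InfoNCE: the audio and video from the same clip are a
    positive pair; all other pairings in the batch act as negatives."""
    logits = a @ v.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random tensors standing in for per-clip features.
model = AudioVideoEmbedder()
audio = torch.randn(8, 40)     # pooled audio features for 8 clips
video = torch.randn(8, 2048)   # pooled visual features for the same 8 clips
a_emb, v_emb = model(audio, video)
loss = contrastive_loss(a_emb, v_emb)
loss.backward()
```

In a setup like this, retrieval uses the same space: a query embedding from one modality (audio, or text in the trimodal case) is compared by cosine similarity against candidate video embeddings, and candidates are ranked by that score.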