Computer science
Leverage
Encoding
Transformer (machine learning)
Embedding
Modality
Information retrieval
Natural language
Architecture
Artificial intelligence
Natural language processing
Physics
Polymer chemistry
Chemistry
Voltage
Visual arts
Art
Gene
Quantum mechanics
Biochemistry
Computer security
Authors
Valentin Gabeur,Chen Sun,Karteek Alahari,Cordelia Schmid
Identifier
DOI:10.1007/978-3-030-58548-8_13
Abstract
The task of retrieving video content relevant to natural language queries plays a critical role in effectively handling internet-scale datasets. Most of the existing methods for this caption-to-video retrieval problem do not fully exploit cross-modal cues present in video. Furthermore, they aggregate per-frame visual features with limited or no temporal information. In this paper, we present a multi-modal transformer to jointly encode the different modalities in video, which allows each of them to attend to the others. The transformer architecture is also leveraged to encode and model the temporal information. On the natural language side, we investigate the best practices to jointly optimize the language embedding together with the multi-modal transformer. This novel framework allows us to establish state-of-the-art results for video retrieval on three datasets. More details are available at http://thoth.inrialpes.fr/research/MMT .
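The abstract's core idea — feeding tokens from several video modalities into one transformer so each modality can attend to the others, with temporal information injected as embeddings — can be sketched in a few lines. This is an illustrative NumPy toy, not the paper's implementation: the dimensions, the two example modalities, and the random stand-ins for learned modality/temporal embeddings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # shared embedding dimension (assumed; the paper uses larger dims)

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over all tokens."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Per-frame features from two hypothetical video modalities.
appearance = rng.standard_normal((8, d))   # 8 frames of visual features
audio      = rng.standard_normal((8, d))   # 8 frames of audio features

# Learned modality and temporal embeddings (random stand-ins here).
mod_emb  = rng.standard_normal((2, d))
time_emb = rng.standard_normal((8, d))

# One joint token sequence: because all modalities share the sequence,
# every token attends to every other, including across modalities.
tokens = np.concatenate([
    appearance + mod_emb[0] + time_emb,
    audio      + mod_emb[1] + time_emb,
])  # shape (16, d)

Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(tokens, Wq, Wk, Wv)
print(out.shape)  # (16, 16): one contextualized vector per input token
```

The key design point the abstract highlights is that concatenating modalities into a single attention sequence is what allows cross-modal cues to be exploited, while the added temporal embeddings let the model reason over frame order.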