Computer Science
Artificial Intelligence
Leverage (Statistics)
Computer Vision
Video Tracking
Natural Language Processing
Video Processing
Authors
Weike Jin,Zhou Zhao,Xiaochun Cao,Jieming Zhu,Xiuqiang He,Yueting Zhuang
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2021-01-01
Volume 30, pp. 5477-5489
Cited by: 12
Identifier
DOI:10.1109/tip.2021.3076556
Abstract
Vision-language research, which focuses on understanding visual content, language semantics, and the relationships between them, has attracted wide interest. Video question answering (Video QA) is one of its typical tasks. Recently, several BERT-style pre-training methods have been proposed and shown to be effective on various vision-language tasks. In this work, we leverage the successful vision-language transformer structure to solve the Video QA problem. However, we do not pre-train it on any video data, because video pre-training requires massive computing resources and is hard to perform with only a few GPUs. Instead, our work aims to leverage image-language pre-training to help with video-language modeling, by sharing a common module design. We further introduce an adaptive spatio-temporal graph to enhance vision-language representation learning. That is, we adaptively refine the spatio-temporal tubes of salient objects according to their spatio-temporal relations, learned through a hierarchical graph convolution process. Finally, we obtain a number of fine-grained tube-level video object representations, which serve as the visual inputs of the vision-language transformer module. Experiments on three widely used Video QA datasets show that our model achieves new state-of-the-art results.
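The core refinement step the abstract describes — relating salient-object tubes through a graph convolution and producing refined tube-level features — can be illustrated with a minimal sketch. This is not the authors' implementation: the shapes, the similarity-based adjacency, and the single-layer update are all illustrative assumptions standing in for the learned spatio-temporal relations and the hierarchical graph convolution of the paper.

```python
import numpy as np

# Hedged sketch with hypothetical shapes: N object tubes, each summarized
# by a d-dimensional feature (e.g. pooled over its frames).
rng = np.random.default_rng(0)
N, d = 5, 8                      # assumed: 5 salient-object tubes, 8-dim features
H = rng.standard_normal((N, d))  # tube-level visual features

# Adjacency from pairwise feature similarity, row-softmax-normalized.
# This stands in for the adaptively learned spatio-temporal relations.
sim = H @ H.T
A = np.exp(sim - sim.max(axis=1, keepdims=True))
A = A / A.sum(axis=1, keepdims=True)

# One graph-convolution layer: aggregate related tubes, project, ReLU.
W = rng.standard_normal((d, d)) * 0.1
H_refined = np.maximum(A @ H @ W, 0.0)

print(H_refined.shape)  # refined tube representations, one per object
```

The refined rows of `H_refined` play the role of the fine-grained tube-level object representations that are fed to the vision-language transformer as visual tokens; the paper's hierarchical version would stack such layers at multiple granularities.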