Computer science
Closed captioning
Smoothing
Artificial intelligence
Visualization
Transformer
Spatial relation
Ambiguity
Sentence
Graph
Computer vision
Theoretical computer science
Physics
Quantum mechanics
Voltage
Image (mathematics)
Programming language
Authors
Liang Li, Xingyu Gao, Jincan Deng, Yunbin Tu, Zheng-Jun Zha, Qingming Huang
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/pages: 31: 2726-2738
Cited by: 24
Identifier
DOI: 10.1109/tip.2022.3158546
Abstract
Video captioning aims to generate a natural language sentence that describes the main content of a video. Since videos contain multiple objects, fully exploring the spatial and temporal relationships among them is crucial for this task. Previous methods wrap the detected objects into input sequences and leverage vanilla self-attention or graph neural networks to reason about visual relations. This cannot make full use of the spatial and temporal nature of a video, and suffers from redundant connections, over-smoothing, and relation ambiguity. To address these problems, in this paper we construct a long short-term graph (LSTG) that simultaneously captures short-term spatial semantic relations and long-term transformation dependencies. Further, to perform relational reasoning over the LSTG, we design a global gated graph reasoning module (G3RM), which introduces a global gate based on global context to control information propagation between objects and alleviate relation ambiguity. Finally, by introducing G3RM into the Transformer in place of self-attention, we propose the long short-term relation transformer (LSRT) to fully mine objects' relations for caption generation. Experiments on the MSVD and MSR-VTT datasets show that the LSRT achieves superior performance compared with state-of-the-art methods. The visualization results indicate that our method alleviates the problem of over-smoothing and strengthens relational reasoning.
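The abstract's central mechanism is a global gate, computed from global context, that scales message passing between object nodes. The following is a minimal, hypothetical sketch in plain Python of that idea; the scalar node features, mean-pooled global context, and parameters `w_gate`/`b_gate` are illustrative assumptions and not the paper's actual G3RM, which operates on learned feature vectors inside a Transformer.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_graph_step(features, adj, w_gate=1.0, b_gate=0.0):
    """One message-passing step with a global gate (illustrative only).

    features: per-object feature scalars (real models use vectors)
    adj: adjacency matrix; adj[i][j] = 1 if object j sends a message to i
    A single gate derived from the mean of all node features scales every
    message, loosely mirroring the idea of using global context to control
    information propagation and damp ambiguous relations.
    """
    n = len(features)
    global_ctx = sum(features) / n                  # global context: mean pooling
    gate = sigmoid(w_gate * global_ctx + b_gate)    # scalar gate in (0, 1)
    updated = []
    for i in range(n):
        msgs = [adj[i][j] * features[j] for j in range(n) if j != i]
        deg = sum(adj[i][j] for j in range(n) if j != i)
        agg = sum(msgs) / max(1, deg)               # mean of neighbor messages
        updated.append(features[i] + gate * agg)    # gated residual update
    return updated, gate
```

Because the gate multiplies the aggregated neighbor message rather than the node's own state, a gate near zero reduces the update to the identity, which is one simple way a gating mechanism can counteract over-smoothing.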