Closed captioning
Computer science
Syntax
Sentence
Natural language processing
Feature (linguistics)
Artificial intelligence
Context (archaeology)
Interpretability
Word (group theory)
Semantics (computer science)
Speech recognition
Linguistics
Image (mathematics)
Paleontology
Philosophy
Biology
Programming language
Authors
Jincan Deng,Liang Li,Beichen Zhang,Shuhui Wang,Zheng-Jun Zha,Qingming Huang
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2022-02-01
Volume/Issue: 32 (2): 880-892
Citations: 36
Identifier
DOI: 10.1109/tcsvt.2021.3063423
Abstract
Video captioning is a challenging task that aims to generate a linguistic description of video content. Most methods incorporate only visual features (2D/3D) as input for generating both visual and non-visual words in the caption. However, generating non-visual words usually depends more on sentence context than on visual features, and wrong non-visual words can reduce sentence fluency and even change the meaning of the sentence. In this paper, we propose a syntax-guided hierarchical attention network (SHAN), which leverages semantic and syntactic cues to integrate visual and sentence-context features for captioning. First, a globally-dependent context encoder is designed to extract a global sentence-context feature that facilitates generating non-visual words. Then, we introduce hierarchical content attention and syntax attention to adaptively integrate features in terms of temporality and feature characteristics, respectively. Content attention helps focus on the time intervals related to the semantics of the current word, while cross-modal syntax attention uses syntax information to model the importance of different features for generating the target word. Moreover, such hierarchical attention enhances the interpretability of the model for captioning. Experiments on the MSVD and MSR-VTT datasets show that our method achieves performance comparable to current methods.
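The two-level attention the abstract describes can be illustrated with a minimal PyTorch-style sketch: a content-attention step that attends over time within each feature stream, followed by a syntax-style step that weighs the attended streams against each other using the decoder state. Every name, dimension, and scoring function below is an illustrative assumption, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalAttentionSketch(nn.Module):
    """Hypothetical sketch of hierarchical content + syntax attention.

    Level 1 (content attention): temporal attention within each stream
    (e.g. 2D appearance, 3D motion, sentence-context), queried by the
    decoder hidden state.
    Level 2 (syntax-style attention): cross-stream weighting, so a
    non-visual word can lean on sentence context while a visual word
    leans on visual features.
    """

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.content_query = nn.Linear(hidden_dim, feat_dim)
        self.syntax_score = nn.Linear(feat_dim + hidden_dim, 1)

    def content_attend(self, feats: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, D) one feature stream; h: (B, H) decoder state.
        q = self.content_query(h).unsqueeze(1)                  # (B, 1, D)
        scores = (feats * q).sum(-1) / feats.size(-1) ** 0.5    # (B, T)
        alpha = F.softmax(scores, dim=-1)                       # temporal weights
        return (alpha.unsqueeze(-1) * feats).sum(1)             # (B, D)

    def forward(self, streams: list[torch.Tensor], h: torch.Tensor) -> torch.Tensor:
        # Level 1: attend over time within each stream independently.
        attended = torch.stack(
            [self.content_attend(f, h) for f in streams], dim=1
        )                                                        # (B, S, D)
        # Level 2: weigh the S attended streams against each other,
        # conditioned on the decoder state (a stand-in for syntax cues).
        h_exp = h.unsqueeze(1).expand(-1, attended.size(1), -1)  # (B, S, H)
        beta = F.softmax(
            self.syntax_score(torch.cat([attended, h_exp], -1)).squeeze(-1),
            dim=-1,
        )                                                        # (B, S)
        return (beta.unsqueeze(-1) * attended).sum(1)            # fused (B, D)
```

As a usage sketch, passing three streams of shape `(batch, frames, feat_dim)` plus a decoder state of shape `(batch, hidden_dim)` yields one fused context vector per step; the per-stream weights `beta` are also what would make such a model's word-by-word feature choices inspectable, in the spirit of the interpretability claim in the abstract.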