Computer science
Closed captioning
Encoder
Decoding methods
Artificial intelligence
Modality
Transformer (deep learning)
Feature extraction
Encoding (memory)
Coding (set theory)
Pattern recognition (psychology)
Image (mathematics)
Data mining
Algorithm
Voltage
Chemistry
Physics
Quantum mechanics
Polymer chemistry
Operating system
Set (abstract data type)
Programming language
Authors
Jing Zhang, Yingshuai Xie, Weichao Ding, Zhe Wang
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2023-02-09
Volume/Issue: 33 (8): 4257-4268
Citations: 18
Identifier
DOI: 10.1109/tcsvt.2023.3243725
Abstract
Numerous studies have shown that in-depth mining of correlations between multi-modal features can help improve the accuracy of cross-modal data analysis tasks. However, current image captioning methods based on the encoder-decoder framework only perform the interaction and fusion of multi-modal features in either the encoding stage or the decoding stage, which cannot effectively alleviate the semantic gap. In this paper, we propose a Deep Fusion Transformer (DFT) for image captioning that provides a deep multi-feature and multi-modal information fusion strategy throughout the encoding-to-decoding process. We propose a novel global cross encoder to align different types of visual features, which can effectively compensate for the differences between features and incorporate each other's strengths. In the decoder, a novel cross on cross attention is proposed to realize hierarchical cross-modal data analysis, extending complex cross-modal reasoning capabilities through the multi-level interaction of visual and semantic features. Extensive experiments conducted on the MSCOCO dataset show that our proposed DFT achieves excellent performance and outperforms state-of-the-art methods. The code is available at https://github.com/weimingboya/DFT.
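The abstract only sketches the mechanism at a high level. As a loose, hypothetical illustration (not the authors' implementation), the idea of hierarchical cross-modal interaction can be pictured as stacked cross-attention: decoded word features first attend to one set of visual features, and the fused result then attends to a second visual stream. All shapes and names below are assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # single-head scaled dot-product attention: each query row
    # attends over all rows of keys_values (keys == values here)
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

# Hypothetical feature shapes: 5 decoded tokens, 36 region features,
# 49 (7x7) grid features, all in a shared 64-dim space.
rng = np.random.default_rng(0)
words = rng.normal(size=(5, 64))
regions = rng.normal(size=(36, 64))
grids = rng.normal(size=(49, 64))

# "Cross on cross" pictured as two stacked cross-modal attention steps:
fused_once = cross_attention(words, regions)   # words attend to regions
fused_twice = cross_attention(fused_once, grids)  # result attends to grids
```

The actual DFT decoder adds learned projections, multiple heads, and residual/normalization layers around each attention step; this sketch only shows why stacking cross-attention lets semantic features interact with several visual streams in sequence.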