Abstract
The attention mechanism in an image captioning model helps the model focus on relevant regions while generating a caption. However, existing attention mechanisms are unable to identify the important regions and important visual features in an image. As a result, models sometimes pay excessive attention to unimportant regions and features during caption generation, producing coarse-grained or even incorrect captions. To address this problem, we propose an "Importance Discrimination Attention" (IDA) module, which discriminates important features from unimportant ones and reduces the chance of the model being misled by unimportant features while generating captions. We also propose an IDA-based image captioning model, IDANet, which is built entirely on the transformer framework. The encoder of IDANet consists of two parts: a pretrained Vision Transformer (ViT), which extracts visual features efficiently, and a refining module, added to the encoder to model the positional and semantic relationships among different grids. For the decoder, we propose the IDA-Decoder, whose framework is similar to that of the transformer decoder. Guided by IDA, the IDA-Decoder focuses on crucial regions and features, rather than all regions and features, while generating the caption. Compared with other attention mechanisms, IDA captures the semantic relevance between important regions and other regions in a fine-grained and efficient way. Captions generated by IDANet accurately capture the relationships among different objects and discriminate between objects of similar size and shape. On the MSCOCO "Karpathy" offline test split, IDANet achieves a 132.0 CIDEr-D score and a 40.3 BLEU-4 score.
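The abstract does not give the exact formulation of IDA, but the idea of gating attention by a learned per-region importance score can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the importance gate here is assumed to be a sigmoid of a learned projection of each region's features, folded into standard scaled dot-product attention before normalization.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ida_attention(query, keys, values, w_importance):
    """Hypothetical importance-discrimination attention sketch.

    Standard scaled dot-product scores are biased by a per-region
    importance gate (a sigmoid of a learned projection of the keys),
    so that low-importance regions contribute less to the output.
    All parameter names here are illustrative assumptions.
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)                      # (n_regions,)
    gate = 1.0 / (1.0 + np.exp(-(keys @ w_importance)))     # sigmoid importance, (n_regions,)
    # Adding log(gate) before softmax is equivalent to multiplying
    # the attention probabilities by the gate and renormalizing.
    weights = softmax(scores + np.log(gate + 1e-9))
    return weights @ values, weights

# Toy usage: 4 grid regions with 8-dimensional features.
rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 8))
values = rng.normal(size=(4, 8))
query = rng.normal(size=8)
w_importance = rng.normal(size=8)
context, attn = ida_attention(query, keys, values, w_importance)
```

The point of the gate is that a region with a high raw attention score but a low importance estimate is down-weighted, which is one plausible reading of how IDA suppresses unimportant regions.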