Authors
Yiwei Ma, Jiayi Ji, Xiaoshuai Sun, Yiyi Zhou, Rongrong Ji
Identifier
DOI: 10.1016/j.patcog.2023.109420
Abstract
In this paper, we study local visual modeling with grid features for image captioning, which is critical for generating accurate and detailed captions. To this end, we propose a Locality-Sensitive Transformer Network (LSTNet) with two novel designs, namely Locality-Sensitive Attention (LSA) and Locality-Sensitive Fusion (LSF). LSA handles intra-layer interaction in the Transformer by modeling the relationship between each grid and its neighbors, which reduces the difficulty of recognizing local objects during captioning. LSF handles inter-layer information fusion, aggregating the outputs of different encoder layers for cross-layer semantic complementarity. With these two designs, LSTNet can model the local visual information of grid features and thereby improve captioning quality. To validate LSTNet, we conduct extensive experiments on the competitive MS-COCO benchmark. The results show that LSTNet is not only capable of local visual modeling but also outperforms a range of state-of-the-art captioning models in offline and online testing, reaching 134.8 and 136.3 CIDEr, respectively. The generalization of LSTNet is further verified on the Flickr8k and Flickr30k datasets. The source code is available on GitHub: https://www.github.com/xmu-xiaoma666/LSTNet.