Keywords
captioning; computer science; convolutional neural network; feature; artificial intelligence; benchmark; context; encoding; sentence; pattern recognition; channel; layer; image; spatial context awareness
Authors
Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Wei Liu, Tat-Seng Chua
Source
Journal: Cornell University - arXiv
Date: 2016-11
Citations: 37
Identifier
DOI: 10.48550/arxiv.1611.05594
Abstract
Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. Existing visual attention models are generally spatial, i.e., the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding an input image. However, we argue that such spatial attention does not necessarily conform to the attention mechanism, a dynamic feature extractor that combines contextual fixations over time, because CNN features are naturally spatial, channel-wise, and multi-layer. In this paper, we introduce a novel convolutional neural network dubbed SCA-CNN that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence-generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the proposed SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. SCA-CNN consistently and significantly outperforms state-of-the-art visual attention-based image captioning methods.
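The core idea, channel-wise attention selecting what feature channels matter followed by spatial attention selecting where in the feature map to look, can be sketched in a few lines. Below is a minimal illustration assuming a PyTorch-style setup; the class name ChannelSpatialAttention, the layer sizes, the mean-pooling used to score channels, and the single-layer scope are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Illustrative channel-wise -> spatial attention over one conv
    feature map (hypothetical sizes; not the authors' reference code)."""

    def __init__(self, num_channels: int, hidden_dim: int, att_dim: int = 256):
        super().__init__()
        # Channel-wise ("what") attention: one score per feature channel.
        self.ch_feat = nn.Linear(num_channels, att_dim)
        self.ch_hid = nn.Linear(hidden_dim, att_dim)
        self.ch_score = nn.Linear(att_dim, num_channels)
        # Spatial ("where") attention: one score per spatial location.
        self.sp_feat = nn.Linear(num_channels, att_dim)
        self.sp_hid = nn.Linear(hidden_dim, att_dim)
        self.sp_score = nn.Linear(att_dim, 1)

    def forward(self, feats: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) conv feature map; h: (B, hidden_dim)
        # decoder hidden state supplying the sentence-generation context.
        b, c, height, width = feats.shape
        # Channel-wise attention: pool each channel, score it against h,
        # then re-weight the channels of the feature map.
        pooled = feats.mean(dim=(2, 3))                          # (B, C)
        beta = torch.softmax(
            self.ch_score(torch.tanh(self.ch_feat(pooled) + self.ch_hid(h))),
            dim=1)                                               # (B, C)
        feats = feats * beta.view(b, c, 1, 1)
        # Spatial attention: score each of the L = H*W locations of the
        # channel-modulated map and take the attention-weighted average.
        flat = feats.view(b, c, height * width).transpose(1, 2)  # (B, L, C)
        sp = torch.tanh(self.sp_feat(flat) + self.sp_hid(h).unsqueeze(1))
        alpha = torch.softmax(self.sp_score(sp).squeeze(-1), dim=1)  # (B, L)
        return (flat * alpha.unsqueeze(-1)).sum(dim=1)           # (B, C)

if __name__ == "__main__":
    att = ChannelSpatialAttention(num_channels=512, hidden_dim=1000)
    ctx = att(torch.randn(2, 512, 14, 14), torch.randn(2, 1000))
    print(ctx.shape)  # torch.Size([2, 512])
```

The sketch covers one feature map and one decoding step; per the abstract, SCA-CNN applies this modulation at multiple conv layers, and the resulting context vector feeds the caption decoder at each word step.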