Keywords
Computer science, Automatic summarization, Transformer, Computer vision, Artificial intelligence, Scalability, Architecture, Image resolution, Engineering, Geography, Database, Voltage, Electrical engineering, Archaeology
Authors
Junjue Wang, Zihang Chen, Ailong Ma, Yanfei Zhong
Identifier
DOI:10.1109/igarss46834.2022.9883199
Abstract
Accurately describing high-spatial-resolution remote sensing images requires understanding both the inner attributes of objects and the outer relations between different objects. Existing image captioning algorithms lack the ability to form global representations, which makes them ill-suited to summarizing complex scenes. To this end, we propose a pure transformer architecture (CapFormer) for remote sensing image captioning. Specifically, a scalable vision transformer is adopted for image representation, where the global content is captured with multi-head self-attention layers. A transformer decoder is designed to successively translate the image features into comprehensive sentences. The transformer decoder explicitly models the historical words and interacts with the image features through cross-attention layers. Comprehensive and ablation experiments on the RSICD dataset demonstrate that CapFormer outperforms state-of-the-art image captioning methods.
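The abstract describes an encoder-decoder design: a vision transformer encoder whose multi-head self-attention captures global image content, and a transformer decoder that attends to previously generated words and cross-attends to the encoded image features. The following is a minimal PyTorch sketch of that general pattern; the hyperparameters, patch size, and all names (ToyCapFormer, etc.) are illustrative assumptions, not the authors' actual CapFormer implementation.

```python
# Minimal ViT-encoder + transformer-decoder captioning sketch.
# Illustrative only; not the published CapFormer code.
import torch
import torch.nn as nn

class ToyCapFormer(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, nhead=8,
                 num_layers=4, num_patches=196):
        super().__init__()
        # Encoder: linear patch embedding + learned positions, then
        # multi-head self-attention layers capture global image content.
        self.patch_embed = nn.Linear(768, d_model)  # assumes 16x16x3 patches
        self.pos = nn.Parameter(torch.zeros(1, num_patches, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        # Decoder: masked self-attention over previous words plus
        # cross-attention to the encoded image features.
        self.word_embed = nn.Embedding(vocab_size, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, patches, tokens):
        # patches: (B, 196, 768) flattened image patches
        # tokens:  (B, T) ids of the words generated so far
        memory = self.encoder(self.patch_embed(patches) + self.pos)
        tgt = self.word_embed(tokens)
        # Causal mask so each position only sees earlier words.
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.lm_head(out)  # next-word logits, (B, T, vocab)

model = ToyCapFormer()
logits = model(torch.randn(2, 196, 768), torch.randint(0, 10000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 10000])
```

At inference time, a caption would be generated autoregressively by repeatedly feeding the decoder its own previous outputs; the causal mask above is what lets a single forward pass train all word positions at once.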