Closed captioning
Computer science
Context (archaeology)
Remote sensing
Benchmark (surveying)
Robustness (evolution)
Diversity (cybernetics)
Encoder
Artificial intelligence
Image (mathematics)
Geography
Cartography
Operating system
Gene
Archaeology
Chemistry
Biochemistry
Authors
Qimin Cheng, Haiyan Huang, Yuan Xu, Yuzhuo Zhou, LI Huan-ying, Zhongyuan Wang
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/issue: 60, pp. 1-19
Citations: 23
Identifier
DOI: 10.1109/tgrs.2022.3201474
Abstract
Recently, the burgeoning demands for captioning-related applications have inspired great endeavors in the remote sensing community. However, current benchmark datasets are deficient in data volume, category variety, and description richness, which hinders the advancement of new remote sensing image captioning approaches, especially those based on deep learning. To overcome this limitation, we present a larger and more challenging benchmark dataset, termed NWPU-Captions. NWPU-Captions contains 157,500 sentences, with all 31,500 images annotated manually by 7 experienced volunteers. The superiority of NWPU-Captions over current publicly available benchmark datasets lies not only in its much larger scale but also in its wider coverage of complex scenes and the richness and variety of its describing vocabulary. Further, a novel encoder-decoder architecture, the multi-level and contextual attention network (MLCA-Net), is proposed. MLCA-Net employs a multi-level attention module to adaptively aggregate image features of specific spatial regions and scales, and introduces a contextual attention module to explore the latent context hidden in remote sensing images. MLCA-Net improves the flexibility and diversity of the generated captions while maintaining their accuracy and conciseness by exploiting the properties of scale variation and semantic ambiguity. Finally, the effectiveness, robustness, and generalization of MLCA-Net are demonstrated through extensive experiments on existing datasets and NWPU-Captions.
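The abstract describes a multi-level attention module that adaptively aggregates image features over both spatial regions and scales. The paper's exact formulation is not given here, so the following is only a minimal numpy sketch of the general idea under assumed shapes: per-scale region features are pooled with region-level attention, then a second attention step weights the scales; the function and variable names are hypothetical, not from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_level_attention(features, query):
    """Aggregate multi-scale image features with two attention stages.

    features: list of (n_regions_i, d) arrays, one per scale.
    query:    (d,) decoder state used to score regions and scales.
    Returns a single (d,) context vector.
    """
    pooled = []
    for f in features:
        scores = softmax(f @ query)   # attention over regions at this scale
        pooled.append(scores @ f)     # (d,) per-scale context vector
    pooled = np.stack(pooled)         # (n_scales, d)
    level_scores = softmax(pooled @ query)  # attention over scales
    return level_scores @ pooled      # (d,) aggregated context

rng = np.random.default_rng(0)
feats = [rng.normal(size=(n, 8)) for n in (49, 196)]  # two feature-map scales
ctx = multi_level_attention(feats, rng.normal(size=8))
print(ctx.shape)  # (8,)
```

In an encoder-decoder captioner, a vector like `ctx` would be recomputed at every decoding step from the current decoder state, letting the model attend to coarse or fine scales depending on the word being generated.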