A Multiscale Grouping Transformer With CLIP Latents for Remote Sensing Image Captioning

Concepts: image captioning; computer science; remote sensing; Transformer; computer vision; artificial intelligence; image (mathematics); computer graphics (images)
Authors
Lingwu Meng, Jing Wang, Ran Meng, Yang Yang, Liang Xiao
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing [Institute of Electrical and Electronics Engineers]
Volume 62, pp. 1-15 | Citations: 5
Identifier
DOI: 10.1109/tgrs.2024.3385500
Abstract

Recent progress has shown that integrating multiscale visual features with advanced Transformer architectures is a promising approach for remote sensing image captioning (RSIC). However, the lack of local modeling ability in self-attention may lead to inaccurate contextual information. Moreover, the scarcity of trainable image-caption pairs makes it challenging to effectively harness the semantic alignment between images and texts. To mitigate these issues, we propose a Multiscale Grouping Transformer with Contrastive Language-Image Pre-training (CLIP) latents (MG-Transformer) for RSIC. First, a CLIP image embedding and a set of region features are extracted within a Multi-level Feature Extraction module. To achieve a comprehensive image representation, a Semantic Correlation module is designed to integrate the image embedding and region features with an attention gate. The integrated image features are then fed into a Transformer model. The Transformer encoder uses dilated convolutions with different dilation rates to obtain multiscale visual features. To enhance the local modeling ability of the self-attention mechanism in the encoder, we introduce a Global Grouping Attention mechanism, which incorporates a grouping operation into self-attention so that each attention head focuses on different contextual information. The Transformer decoder then adopts the Meshed Cross-Attention mechanism to establish relationships between visual features at various scales and text features, which facilitates caption generation. Experimental results on three RSIC datasets demonstrate the superiority of the proposed MG-Transformer. The code will be publicly available at https://github.com/One-paper-luck/MG-Transformer.
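The encoder's Global Grouping Attention incorporates a grouping operation into self-attention so that each group of heads focuses on different contextual information, while dilated convolutions with different dilation rates supply multiscale visual features. The sketch below is a minimal, hypothetical PyTorch reading of that idea, not the authors' implementation (their official code is at the GitHub link above): the class name GroupedSelfAttention, the choice of feeding each head group keys/values from a differently dilated convolution, and all hyperparameters are illustrative assumptions.

# Hypothetical sketch: multi-head self-attention whose heads are split into
# groups; each group attends over keys/values produced by a different dilated
# 1-D convolution, so different head groups see different context scales.
import torch
import torch.nn as nn

class GroupedSelfAttention(nn.Module):
    def __init__(self, dim=512, num_heads=8, num_groups=2, dilations=(1, 2)):
        super().__init__()
        assert num_heads % num_groups == 0 and len(dilations) == num_groups
        self.num_heads = num_heads
        self.num_groups = num_groups
        self.head_dim = dim // num_heads
        self.q_proj = nn.Linear(dim, dim)
        self.kv_proj = nn.Linear(dim, 2 * dim)
        # One dilated conv per head group; padding keeps the sequence length fixed.
        self.ctx_convs = nn.ModuleList([
            nn.Conv1d(dim, dim, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, N, dim) region/grid features
        B, N, _ = x.shape
        q = self.q_proj(x).view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        heads_per_group = self.num_heads // self.num_groups
        outputs = []
        for g, conv in enumerate(self.ctx_convs):
            # Dilated conv gives this head group its own contextualised keys/values.
            ctx = conv(x.transpose(1, 2)).transpose(1, 2)            # (B, N, dim)
            k, v = self.kv_proj(ctx).chunk(2, dim=-1)
            k = k.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
            v = v.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
            hs = slice(g * heads_per_group, (g + 1) * heads_per_group)
            attn = torch.softmax(
                q[:, hs] @ k[:, hs].transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
            outputs.append(attn @ v[:, hs])                          # (B, H/G, N, d_h)
        out = torch.cat(outputs, dim=1).transpose(1, 2).reshape(B, N, -1)
        return self.out_proj(out)

if __name__ == "__main__":
    layer = GroupedSelfAttention()
    print(layer(torch.randn(2, 49, 512)).shape)  # torch.Size([2, 49, 512])

One design note: splitting heads into groups keeps the parameter count of a standard multi-head layer while letting each group specialise on a different receptive field, which is one way to add local modeling ability to plain self-attention.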