
Extended Vision Transformer (ExViT) for Land Use and Land Cover Classification: A Multimodal Deep Learning Framework

Keywords: Computer science, Artificial intelligence, Deep learning, Earth observation, Hyperspectral imaging, Convolutional neural network, Synthetic aperture radar, Modality (human-computer interaction), Land cover, Discriminative model, Machine learning, Pattern recognition (psychology), Land use, Civil engineering, Aerospace engineering, Engineering, Satellite
Authors
Jing Yao,Bing Zhang,Chenyu Li,Danfeng Hong,Jocelyn Chanussot
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing [Institute of Electrical and Electronics Engineers]
Volume 61, pp. 1-15. Cited by: 154
Identifier
DOI:10.1109/tgrs.2023.3284671
Abstract

The recent success of attention-driven deep models, with the Vision Transformer (ViT) as one of the most representative, has inspired a wave of research exploring their adaptation to broader domains. However, current Transformer-based approaches in the remote sensing (RS) community focus mostly on single-modality data, which limits their ability to make full use of the ever-growing volume of multimodal Earth observation data. To this end, we propose a novel multimodal deep learning framework, abbreviated as ExViT, that extends the conventional ViT with minimal modifications and targets the task of land use and land cover classification. Unlike common stems that adopt either a linear patch projection or a deep regional embedder, our approach processes multimodal RS image patches with parallel branches of position-shared ViTs extended with separable convolution modules, which offers an economical way to leverage both spatial and modality-specific channel information. Furthermore, to promote information exchange across heterogeneous modalities, the tokenized embeddings are fused through a cross-modality attention module that exploits pixel-level spatial correlation in RS scenes. Both modifications significantly improve the discriminative ability of the classification tokens in each modality, and a further performance gain is then obtained by a fully token-based decision-level fusion module. We conduct extensive experiments on two multimodal RS benchmark datasets, i.e., the Houston2013 dataset containing hyperspectral and light detection and ranging (LiDAR) data, and the Berlin dataset with hyperspectral and synthetic aperture radar (SAR) data, to demonstrate that ExViT outperforms concurrent competitors based on Transformer or convolutional neural network (CNN) backbones, as well as several competitive machine learning models. The source code and investigated datasets of this work will be made publicly available at https://github.com/jingyao16/ExViT.
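To make the architecture described in the abstract concrete, the following is a minimal PyTorch sketch of an ExViT-style two-branch classifier: separable-convolution patch embedding per modality, position embeddings and a classification token shared across the parallel branches, a cross-modality attention step between the two token sequences, and averaging of the per-modality classification-token logits as a stand-in for the paper's tokens-based decision-level fusion. All module names, dimensions, and the exact fusion rule are illustrative assumptions, not the authors' reference implementation (see the linked repository for that).

```python
# Minimal PyTorch sketch of an ExViT-style two-branch multimodal classifier.
# Module names, dimensions, and the fusion rule are illustrative assumptions,
# not the authors' reference implementation.
import torch
import torch.nn as nn


class SeparableConvEmbed(nn.Module):
    """Patch embedding via a depthwise + pointwise (separable) convolution."""

    def __init__(self, in_ch, dim, patch=4):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=patch,
                                   stride=patch, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, dim, kernel_size=1)

    def forward(self, x):                        # x: (B, C, H, W)
        x = self.pointwise(self.depthwise(x))    # (B, dim, H/patch, W/patch)
        return x.flatten(2).transpose(1, 2)      # (B, N, dim) token sequence


class CrossModalAttention(nn.Module):
    """Tokens of one modality attend to the tokens of the other modality."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, q_tokens, kv_tokens):
        fused, _ = self.attn(q_tokens, kv_tokens, kv_tokens)
        return self.norm(q_tokens + fused)       # residual cross-modal fusion


class ExViTSketch(nn.Module):
    def __init__(self, ch_a, ch_b, dim=64, depth=4, heads=4,
                 patch=4, img_size=32, num_classes=15):
        super().__init__()
        n_tokens = (img_size // patch) ** 2
        self.embed_a = SeparableConvEmbed(ch_a, dim, patch)
        self.embed_b = SeparableConvEmbed(ch_b, dim, patch)
        # Position embedding and CLS token are shared by both branches,
        # mirroring the "position-shared" ViTs described in the abstract.
        self.pos = nn.Parameter(torch.zeros(1, n_tokens + 1, dim))
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))

        def encoder():
            layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                               batch_first=True)
            return nn.TransformerEncoder(layer, depth)

        self.branch_a, self.branch_b = encoder(), encoder()
        self.cross_ab = CrossModalAttention(dim, heads)
        self.cross_ba = CrossModalAttention(dim, heads)
        # One classifier per modality's CLS token; the paper fuses all tokens
        # at decision level, so averaging the two logit vectors is a stand-in.
        self.head_a = nn.Linear(dim, num_classes)
        self.head_b = nn.Linear(dim, num_classes)

    def _tokenize(self, embed, x):
        t = embed(x)                                      # (B, N, dim)
        cls = self.cls.expand(t.size(0), -1, -1)
        return torch.cat([cls, t], dim=1) + self.pos      # shared positions

    def forward(self, x_a, x_b):          # e.g. HSI patch and LiDAR/SAR patch
        t_a = self.branch_a(self._tokenize(self.embed_a, x_a))
        t_b = self.branch_b(self._tokenize(self.embed_b, x_b))
        # Exchange information across modalities before classification.
        t_a, t_b = self.cross_ab(t_a, t_b), self.cross_ba(t_b, t_a)
        logits_a = self.head_a(t_a[:, 0])                 # CLS token, branch A
        logits_b = self.head_b(t_b[:, 0])                 # CLS token, branch B
        return (logits_a + logits_b) / 2                  # decision-level fusion


if __name__ == "__main__":
    # Shapes loosely follow the Houston2013 setting (144-band HSI + 1-band DSM);
    # patch size and class count here are arbitrary placeholders.
    model = ExViTSketch(ch_a=144, ch_b=1, img_size=32, num_classes=15)
    hsi = torch.randn(2, 144, 32, 32)
    lidar = torch.randn(2, 1, 32, 32)
    print(model(hsi, lidar).shape)        # torch.Size([2, 15])
```

The sketch keeps the two encoder branches weight-independent while sharing only the positional embedding and CLS token, so each modality retains its own spatial-spectral representation until the cross-attention and decision-level fusion stages.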