Extended Vision Transformer (ExViT) for Land Use and Land Cover Classification: A Multimodal Deep Learning Framework

Keywords: Computer Science; Artificial Intelligence; Deep Learning; Earth Observation; Hyperspectral Imaging; Convolutional Neural Network; Synthetic Aperture Radar; Modality (Human-Computer Interaction); Land Cover; Discriminative Model; Machine Learning; Pattern Recognition; Land Use; Civil Engineering; Aerospace Engineering; Engineering; Satellite
Authors
Jing Yao, Bing Zhang, Chenyu Li, Danfeng Hong, Jocelyn Chanussot
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing [Institute of Electrical and Electronics Engineers]
Vol. 61, pp. 1-15. Citations: 174
Identifier
DOI:10.1109/tgrs.2023.3284671
Abstract

The recent success of attention-driven deep models, with the Vision Transformer (ViT) as one of the most representative examples, has inspired a wave of research exploring their adaptation to broader domains. However, current Transformer-based approaches in the remote sensing (RS) community focus mostly on single-modality data, which limits their ability to make full use of the ever-growing body of multimodal Earth observation data. To this end, we propose a novel multimodal deep learning framework, abbreviated as ExViT, that extends the conventional ViT with minimal modifications, targeting the task of land use and land cover classification. Unlike common stems that adopt either linear patch projection or a deep regional embedder, our approach processes multimodal RS image patches with parallel branches of position-shared ViTs extended with separable convolution modules, offering an economical way to leverage both spatial and modality-specific channel information. Furthermore, to promote information exchange across heterogeneous modalities, the tokenized embeddings are fused through a cross-modality attention module that exploits pixel-level spatial correlation in RS scenes. Both modifications significantly improve the discriminative ability of the classification tokens in each modality, and a further performance gain is attained by a full tokens-based decision-level fusion module. We conduct extensive experiments on two multimodal RS benchmark datasets, i.e., the Houston2013 dataset containing hyperspectral and light detection and ranging (LiDAR) data, and the Berlin dataset with hyperspectral and synthetic aperture radar (SAR) data, demonstrating that our ExViT outperforms concurrent competitors based on Transformer or convolutional neural network (CNN) backbones, as well as several competitive machine learning-based models.
The source codes and investigated datasets of this work will be made publicly available at https://github.com/jingyao16/ExViT.