Multimodal Remote Sensing Image Segmentation With Intuition-Inspired Hypergraph Modeling

Keywords: Computer science · Hypergraph · Segmentation · Artificial intelligence · Semantics (computer science) · Pattern recognition (psychology) · Mathematics · Programming languages · Discrete mathematics
Authors
Qibin He, Xian Sun, Wenhui Diao, Zhiyuan Yan, Fanglong Yao, Kun Fu
Source
Journal: IEEE Transactions on Image Processing [Institute of Electrical and Electronics Engineers]
Volume/pages: 32: 1474-1487 · Cited by: 69
Identifier
DOI: 10.1109/tip.2023.3245324
Abstract

Multimodal remote sensing (RS) image segmentation aims to comprehensively utilize multiple RS modalities to assign pixel-level semantics to the studied scenes, which can provide a new perspective for global city understanding. Multimodal segmentation inevitably encounters the challenge of modeling intra- and inter-modal relationships, i.e., object diversity and modal gaps. Previous methods, however, are usually designed for a single RS modality and are limited by the noisy collection environment and poor discrimination information. Neuropsychology and neuroanatomy confirm that the human brain performs the guiding perception and integrative cognition of multimodal semantics through intuitive reasoning. Therefore, establishing a semantic understanding framework inspired by intuition to realize multimodal RS segmentation becomes the main motivation of this work. Driven by the superiority of hypergraphs in modeling high-order relationships, we propose an intuition-inspired hypergraph network ( $I^{2}HN$ ) for multimodal RS segmentation. Specifically, we present a hypergraph parser to imitate guiding perception to learn intra-modal object-wise relationships. It parses the input modality into irregular hypergraphs to mine semantic clues and generate robust mono-modal representations. In addition, we also design a hypergraph matcher to dynamically update the hypergraph structure from the explicit correspondence of visual concepts, similar to integrative cognition, to improve cross-modal compatibility when fusing multimodal features. Extensive experiments on two multimodal RS datasets show that the proposed $I^{2}HN$ outperforms the state-of-the-art models, achieving F1/mIoU accuracy of 91.4%/82.9% on the ISPRS Vaihingen dataset and 92.1%/84.2% on the MSAW dataset.
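The abstract does not give the exact layer definitions of the hypergraph parser or matcher, but the core primitive behind hypergraph networks of this kind is a hypergraph convolution: node features are aggregated through an incidence matrix that connects each node to the hyperedges it belongs to, with degree normalization. The sketch below is a generic HGNN-style hypergraph convolution in NumPy, purely for illustration; the function name, toy incidence matrix, and identity weights are assumptions, not the paper's $I^{2}HN$ implementation.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One HGNN-style hypergraph convolution step.

    X:     (n_nodes, d_in)  node feature matrix
    H:     (n_nodes, n_edges) incidence matrix (H[v, e] = 1 if node v
           belongs to hyperedge e)
    Theta: (d_in, d_out)    weight matrix (learnable in a real network)

    Computes D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2} X Theta, with unit
    hyperedge weights for simplicity.
    """
    Dv = H.sum(axis=1)                                   # node degrees
    De = H.sum(axis=0)                                   # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-12))
    # Normalized hypergraph adjacency: nodes exchange information
    # through every hyperedge they share.
    A = Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt
    return A @ X @ Theta

# Toy example: 4 nodes grouped by 2 hyperedges
# (nodes 0-2 share edge 0; nodes 2-3 share edge 1).
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)
X = np.eye(4)       # one-hot features so the output exposes A itself
Theta = np.eye(4)   # identity weights for inspection
out = hypergraph_conv(X, H, Theta)
print(out.shape)    # (4, 4)
```

With identity features and weights, the output equals the normalized adjacency: nodes 0 and 3 share no hyperedge, so their mixing coefficient is zero, while node 2 (a member of both hyperedges) couples to every other node. In a segmentation network such a step would operate on pixel- or region-level feature vectors rather than one-hot toys.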