Feature extraction
Computer science
Pattern recognition (psychology)
Artificial intelligence
Feature (linguistics)
Image texture
Remote sensing
Texture (cosmology)
Contextual image classification
Image segmentation
Geology
Image (mathematics)
Philosophy
Linguistics
Authors
Zhengyi Xu, Wen Jiang, Jie Geng
Identifier
DOI: 10.1109/tgrs.2024.3368091
Abstract
The pixel-level classification of multimodal remote sensing images plays a crucial role in the intelligent interpretation of remote sensing data. However, existing methods that mainly focus on feature interaction and fusion fail to address the challenges posed by confounders introduced by sensor imaging bias, which limits their performance. In this paper, we introduce causal inference into the intelligent interpretation of remote sensing and propose a new Texture-Aware Causal Feature Extraction Network (TeACFNet) for pixel-level fusion classification. Specifically, we propose a two-stage causal feature extraction framework that helps networks learn more explicit class representations by capturing the causal relationships between multimodal heterogeneous data. In addition, to solve the problem of low-resolution land cover feature representation in remote sensing images, we propose the Refined Statistical Texture Extraction (ReSTE) module. This module integrates the semantics of statistical textures in shallow feature maps through feature refinement, quantization, and encoding. Extensive experiments on two publicly available datasets with different modalities, the Houston 2013 and Berlin datasets, demonstrate the remarkable effectiveness of our proposed method, which reaches a new state-of-the-art.
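To make the "feature refinement, quantization, and encoding" steps mentioned for the ReSTE module more concrete, the following is a minimal PyTorch sketch of a statistical-texture block built around those three operations. It is an illustration based only on the abstract: the class name `ReSTESketch`, the `num_levels` parameter, and the soft-quantization scheme are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a statistical-texture block following the abstract's
# "refinement, quantization, and encoding" description. Names and design
# choices here are assumptions, not the published ReSTE module.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReSTESketch(nn.Module):
    def __init__(self, in_channels: int, num_levels: int = 8):
        super().__init__()
        self.num_levels = num_levels
        # Refinement: a 1x1 conv block that cleans up the shallow feature map.
        self.refine = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, kernel_size=1),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
        )
        # Encoding: map per-level texture statistics back to the feature dimension.
        self.encode = nn.Conv2d(num_levels, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) shallow feature map.
        feat = self.refine(x)

        # Reduce channels to a single intensity map and normalize to [0, 1],
        # a stand-in for the grey-level statistics used in texture analysis.
        intensity = feat.mean(dim=1, keepdim=True)            # (B, 1, H, W)
        i_min = intensity.amin(dim=(2, 3), keepdim=True)
        i_max = intensity.amax(dim=(2, 3), keepdim=True)
        intensity = (intensity - i_min) / (i_max - i_min + 1e-6)

        # Soft quantization: similarity of each pixel to K uniform levels.
        levels = torch.linspace(0.0, 1.0, self.num_levels, device=x.device)
        levels = levels.view(1, self.num_levels, 1, 1)
        sim = -torch.abs(intensity - levels)                   # (B, K, H, W)
        quant = F.softmax(sim * 10.0, dim=1)                   # soft one-hot over levels

        # Encode the quantized statistics and fuse them back residually.
        texture = self.encode(quant)                            # (B, C, H, W)
        return feat + texture


if __name__ == "__main__":
    block = ReSTESketch(in_channels=64, num_levels=8)
    out = block(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

The residual fusion at the end is one plausible way to inject texture cues into shallow features without discarding the refined map; the paper itself should be consulted for the actual ReSTE design.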