Keywords
Computer science, Panchromatic film, Unification, Artificial intelligence, Hyperspectral imaging, Multispectral image, Exploitation, Pattern recognition (psychology), Feature extraction, Data mining, Computer security, Programming language
Authors
Yu Han, Hao Zhu, Licheng Jiao, Xiaoyu Yi, Xiaotong Li, Biao Hou, Wenping Ma, Shuang Wang
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
Publisher: Institute of Electrical and Electronics Engineers
Date: 2023-01-01
Volume: 61, pp. 1-15
Identifiers
DOI: 10.1109/tgrs.2023.3321729
Abstract
The rapid progress in remote sensing technology has made it convenient for satellites to capture both multispectral (MS) and panchromatic (PAN) images. MS images carry richer spectral information, while PAN images offer higher spatial resolution. How to exploit the complementarity between MS and PAN images, effectively combining their respective advantageous features while alleviating mode differences, has become a crucial research task. This paper designs a Style Separation and Mode Unification network (SSMU-Net) for MS and PAN image classification from a novel and effective perspective. The network can be divided into two stages: style separation and mode unification. In the style separation stage, we use wavelet decomposition and techniques similar to generative adversarial networks to preliminarily separate the information of MS and PAN into different components. These components better preserve the complete information of the original data and have their own advantages in style and content. We then propose a Symmetrical Triplet Traction module to perform style traction on the different components, making style features more distinctive and content features more unified, thereby achieving feature separation and purification. In the mode unification stage, we design an encoder-decoder model to reduce the impact of mode differences. Experimental results on multiple datasets validate the effectiveness of the proposed method: overall accuracy improves by approximately 4% on the Shanghai and Beijing datasets and exceeds 99.28% on the Hohhot and Vancouver datasets. Our code is available at: https://github.com/proudpie/SSMU-Net.
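The two ideas the abstract names, wavelet-based separation into content-like and style-like components and a symmetric triplet objective that pulls content features of the two modalities together while pushing style features apart, can be illustrated with a minimal sketch. This is not the authors' released code (see their GitHub link above); the function names, tensor shapes, and random stand-in data are illustrative assumptions, using PyWavelets and PyTorch.

```python
# Minimal sketch (not SSMU-Net itself): single-level 2-D wavelet split plus a
# symmetric triplet loss, under the assumptions stated in the text above.
import numpy as np
import pywt
import torch
import torch.nn.functional as F


def wavelet_split(img: np.ndarray, wavelet: str = "haar"):
    """Single-level 2-D DWT: returns the low-frequency approximation
    (content-like) and the stacked high-frequency detail bands (style-like)."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    return cA, np.stack([cH, cV, cD], axis=0)


def symmetric_triplet_loss(anchor, positive, negative, margin: float = 1.0):
    """Triplet traction applied in both directions: each modality's content
    feature serves once as anchor and once as positive."""
    forward = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    backward = F.triplet_margin_loss(positive, anchor, negative, margin=margin)
    return 0.5 * (forward + backward)


if __name__ == "__main__":
    # Toy PAN patch (single band, higher spatial resolution) and MS patch
    # (4 bands, lower resolution); random data stands in for real imagery.
    pan = np.random.rand(64, 64).astype(np.float32)
    ms = np.random.rand(4, 16, 16).astype(np.float32)

    pan_content, pan_style = wavelet_split(pan)
    ms_content, ms_style = wavelet_split(ms[0])  # per-band decomposition

    # Hypothetical 128-D embeddings standing in for encoder outputs.
    f_ms_content = torch.randn(8, 128)
    f_pan_content = torch.randn(8, 128)
    f_ms_style = torch.randn(8, 128)

    loss = symmetric_triplet_loss(f_ms_content, f_pan_content, f_ms_style)
    print(pan_content.shape, pan_style.shape, float(loss))
```

In this sketch the low-frequency approximation plays the role of the shared "content" component and the detail bands the modality-specific "style" component; the actual separation, traction, and encoder-decoder unification in SSMU-Net are learned and more elaborate than this toy decomposition.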