Computer Science
Artificial Intelligence
Convolutional Neural Network
Pattern Recognition (Psychology)
Domain Adaptation
Feature (Linguistics)
Representation (Politics)
Domain (Mathematical Analysis)
Adaptation (Eye)
Feature Extraction
Feature Vector
Contextual Image Classification
Classifier (UML)
Image (Mathematics)
Mathematics
Mathematical Analysis
Philosophy
Physics
Optics
Political Science
Law
Politics
Linguistics
Authors
Ben Niu, Zongxu Pan, Jixiang Wu, Yuxin Hu, Bin Lei
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/Issue: 60: 1-19
Citations: 5
Identifier
DOI: 10.1109/tgrs.2022.3217180
Abstract
In recent years, convolutional neural networks (CNNs) have made significant progress in remote sensing scene classification (RSSC) tasks. Because obtaining a large number of labeled images is time-consuming and expensive, and the generalization ability of supervised models is limited, domain adaptation is widely introduced into RSSC. However, existing adaptation approaches mainly aim to align the distribution of features in a single representation space, which loses information and limits the spatial range for extracting domain-invariant features. In addition, some methods align pixel-level (local) and image-level (global) features simultaneously for better results but require a manual search for the best weighting of the two parts, which is time-consuming and computationally expensive. To overcome these issues, a novel feature fusion-and-alignment approach named Multi-Representation Dynamic Adaption Network (MRDAN) is proposed for cross-domain RSSC. Concretely, a Feature-Fusion Adaptation (FFA) module is embedded into the network, which maps samples to multiple representations and fuses them to obtain a broader domain-invariant feature space. Based on this hybrid space, we introduce a cross-domain Dynamic Feature-Alignment Mechanism (DFAM) to quantitatively evaluate and adjust the relative importance of the local and global adaptation losses during domain adaptation. Experimental results on the 12 transfer tasks between the UC Merced land-use, WHU-RS19, AID, and RSSCN7 data sets demonstrate the effectiveness of the proposed MRDAN over state-of-the-art domain adaptation methods in RSSC.
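The abstract outlines two components: a Feature-Fusion Adaptation (FFA) module that maps each sample into several representation sub-spaces and fuses them into a broader hybrid space, and a Dynamic Feature-Alignment Mechanism (DFAM) that automatically balances the local (pixel-level) and global (image-level) adaptation losses instead of weighting them by hand. The sketch below illustrates these two ideas only in outline; it is not the authors' code, and the class/function names (FeatureFusionAdaptation, dynamic_alignment_loss) and the particular magnitude-based weighting rule are assumptions made for illustration.

```python
# Minimal PyTorch-style sketch of the FFA and DFAM ideas described in the
# abstract. All names and the weighting rule are hypothetical, not the
# published MRDAN implementation.
import torch
import torch.nn as nn


class FeatureFusionAdaptation(nn.Module):
    """Maps a backbone feature into several representation sub-spaces and
    fuses them into one broader feature space (the FFA idea)."""

    def __init__(self, in_dim: int, branch_dim: int, num_branches: int = 3):
        super().__init__()
        # Each branch stands for one representation sub-space (assumed design).
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, branch_dim), nn.ReLU())
            for _ in range(num_branches)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the per-branch representations into a hybrid space.
        return torch.cat([branch(x) for branch in self.branches], dim=1)


def dynamic_alignment_loss(local_loss: torch.Tensor,
                           global_loss: torch.Tensor) -> torch.Tensor:
    """One plausible reading of the DFAM idea: weight the pixel-level (local)
    and image-level (global) adaptation losses by their relative magnitudes
    rather than by a hand-tuned constant."""
    with torch.no_grad():
        total = local_loss + global_loss + 1e-8
        w_local = local_loss / total    # larger loss -> larger weight (assumption)
        w_global = global_loss / total
    return w_local * local_loss + w_global * global_loss


if __name__ == "__main__":
    # Usage sketch on random tensors standing in for backbone features.
    ffa = FeatureFusionAdaptation(in_dim=2048, branch_dim=256)
    fused = ffa(torch.randn(8, 2048))                 # hybrid features, shape (8, 768)
    loss = dynamic_alignment_loss(torch.tensor(0.7),  # placeholder local loss
                                  torch.tensor(0.3))  # placeholder global loss
    print(fused.shape, loss.item())
```

The weighting step is wrapped in no_grad so the weights act as data-dependent constants in each iteration; this is one simple way to realize a "dynamic" balance between the two losses and is only an assumption about how DFAM might be implemented.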