Computer science
Artificial intelligence
Discriminative model
Pattern recognition (psychology)
Matching (statistics)
Feature (linguistics)
Focus (optics)
Contextual image classification
Feature extraction
Feature learning
Feature vector
Domain (mathematical analysis)
Hyperspectral imaging
Computer vision
Image (mathematics)
Mathematics
Philosophy
Mathematical analysis
Physics
Optics
Statistics
Linguistics
Authors
Yujie Ning, Jiangtao Peng, Quanyong Liu, Yi Huang, Weiwei Sun, Qian Du
Identifier
DOI: 10.1109/tgrs.2023.3295357
Abstract
Cross-scene hyperspectral image classification (HSIC) is a challenging topic in remote sensing, especially when the target domain has no labels. Domain adaptation (DA) techniques for cross-scene HSIC aim to label a target domain by associating it with a labeled source domain. Most existing DA methods learn domain-invariant features by reducing the feature distance across domains. Contrastive learning has recently shown excellent performance in computer vision tasks, but there has been little research on its performance in cross-scene HSIC. Since its underlying idea resembles reducing feature distance, this paper explores whether contrastive learning can achieve cross-scene HSIC. In this work, an instance-to-instance contrastive learning framework based on category matching (CLCM) is designed. The main idea is to take category information as the premise in the feature space, regard each source sample as an anchor, and find its positive and negative matching samples across domains. Instance-level discriminative feature embeddings are learned by attracting positive matching pairs and repelling negative matching pairs, where the target labels are pseudo-labels. To further improve the quality of contrastive learning, the network focuses on extracting spectral-spatial features of the HSI to represent semantic information more accurately, and high-confidence target samples are screened to update the network. Experiments on three DA tasks confirm the effectiveness and feature discriminativeness of CLCM and provide new ideas for cross-scene image classification.
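The category-matched contrastive idea described in the abstract can be illustrated with a minimal NumPy sketch: each source sample acts as an anchor, target samples whose pseudo-label matches the anchor's class are treated as positives, and the remaining target samples as negatives, in an InfoNCE-style loss. This is a simplified illustration under stated assumptions, not the authors' implementation; the function name and signature are hypothetical.

```python
import numpy as np

def clcm_contrastive_loss(src_feats, src_labels, tgt_feats, tgt_pseudo_labels, tau=0.1):
    """Hypothetical sketch of an instance-to-instance contrastive loss
    with cross-domain category matching (not the paper's exact code).
    Each source sample is an anchor; target samples sharing its class
    (by pseudo-label) are positives, all other target samples negatives."""
    # L2-normalize so the dot product is cosine similarity
    s = src_feats / np.linalg.norm(src_feats, axis=1, keepdims=True)
    t = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    sim = (s @ t.T) / tau                       # temperature-scaled similarities
    losses = []
    for i in range(len(s)):
        pos = tgt_pseudo_labels == src_labels[i]
        if not pos.any():                       # no cross-domain match for this anchor
            continue
        logits = sim[i]
        m = logits.max()                        # stable log-sum-exp
        log_prob = logits - m - np.log(np.exp(logits - m).sum())
        losses.append(-log_prob[pos].mean())    # attract positives, repel the rest
    return float(np.mean(losses))
```

Minimizing this quantity pulls same-class cross-domain pairs together and pushes different-class pairs apart, which is the feature-distance-reduction behavior the abstract compares to conventional DA.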