Keywords
Hyperspectral imaging, Artificial intelligence, Smoothing, Pattern recognition (psychology), Computer science, Feature learning, Similarity (geometry), Machine learning, Feature (linguistics), Noise (video), Feature extraction, Image (mathematics), Computer vision, Linguistics, Philosophy
Authors
Liu Li-zhu, Hui Zhang, Yaonan Wang
Source
Journal: IEEE Transactions on Instrumentation and Measurement (Institute of Electrical and Electronics Engineers)
Date: 2024-01-01
Volume: 73, Pages: 1-14
Citations: 1
Identifier
DOI:10.1109/tim.2024.3406811
Abstract
Semi-supervised learning has become an effective paradigm for reducing the reliance of hyperspectral image (HSI) classification on labeled data. State-of-the-art semi-supervised HSI classification methods learn supplementary knowledge from pseudo-labels, which are predicted by a deep learning model on unlabeled data. Nevertheless, these methods usually overlook the impacts of pseudo-label noise, intra-class spectral variability, and inter-class spectral similarity, which can fundamentally limit the model's ability to refine feature representations. To address these prevalent issues, we propose a novel semi-supervised framework, contrastive mutual learning with pseudo-label smoothing (CMLP), that enables the model to learn more refined features. First, we combine a mutual learning model with a pseudo-label smoothing strategy to reduce the noisy knowledge absorbed by the classification model during HSI feature extraction. Second, we incorporate a mutual pseudo-label guided contrastive learning approach, which maximizes inter-class dispersion and intra-class compactness, thereby mitigating intra-class spectral variability and inter-class spectral similarity within HSI data. In addition, we introduce a dynamic threshold strategy that adjusts the number of unlabeled samples used as training progresses, mitigating the adverse impact of unstable predictions on unlabeled data in the early stages of training. Extensive experiments on three benchmark HSI datasets demonstrate that the proposed method achieves competitive performance compared with state-of-the-art methods.
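To make two of the ideas named in the abstract concrete, the snippet below is a minimal, self-contained sketch rather than the authors' CMLP implementation: it assumes a generic PyTorch classifier and illustrates (a) converting hard pseudo-labels on unlabeled data into smoothed soft targets and (b) a confidence threshold that starts strict and relaxes as training stabilizes, so few unlabeled samples enter training early on. The function names, smoothing factor, and threshold schedule are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only (not the authors' CMLP code).
# Requires PyTorch >= 1.10 for soft-target cross_entropy.
import torch
import torch.nn.functional as F


def smooth_pseudo_labels(logits: torch.Tensor, num_classes: int,
                         smoothing: float = 0.1) -> torch.Tensor:
    """Turn hard pseudo-labels (argmax of logits) into smoothed soft targets.

    Each target keeps (1 - smoothing) probability mass on the predicted class
    and spreads the rest uniformly, damping the effect of noisy pseudo-labels.
    """
    hard = logits.argmax(dim=1)
    targets = torch.full((logits.size(0), num_classes),
                         smoothing / (num_classes - 1),
                         device=logits.device)
    targets.scatter_(1, hard.unsqueeze(1), 1.0 - smoothing)
    return targets


def dynamic_threshold(epoch: int, total_epochs: int,
                      start: float = 0.95, end: float = 0.80) -> float:
    """Confidence threshold that relaxes linearly over training, so only very
    confident unlabeled samples are admitted while predictions are unstable."""
    frac = min(epoch / max(total_epochs - 1, 1), 1.0)
    return start + frac * (end - start)


if __name__ == "__main__":
    # Toy usage: select confident unlabeled pixels, build smoothed targets.
    num_classes, epoch, total_epochs = 9, 5, 100
    logits = torch.randn(32, num_classes)           # logits on an unlabeled batch
    probs = F.softmax(logits, dim=1)
    tau = dynamic_threshold(epoch, total_epochs)
    keep = probs.max(dim=1).values >= tau           # confident samples only
    if keep.any():
        targets = smooth_pseudo_labels(logits[keep], num_classes)
        loss = F.cross_entropy(logits[keep], targets)   # soft-target CE
        print(f"kept {int(keep.sum())} samples, loss = {loss.item():.4f}")
    else:
        print("no unlabeled samples passed the threshold this step")
```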