Journal: IEEE Transactions on Geoscience and Remote Sensing [Institute of Electrical and Electronics Engineers] | Date: 2024-01-01 | Volume 62, pp. 1-13 | Cited by: 1
Identifier
DOI: 10.1109/TGRS.2024.3406690
Abstract
Recently, deep learning has shown promising performance in the joint classification of multimodal remote sensing (RS) data. However, most approaches follow a supervised learning paradigm, in which the discrimination capability is limited by the paucity of labeled samples. Although some attempts have been made to develop semi-supervised methods, they tend to select only highly confident predictions as pseudo ground truth and discard the unreliable ones. In fact, unreliable samples can also provide useful information, e.g., indicating the categories to which a sample may belong and those to which it definitely does not. Motivated by this, a novel uncertainty-aware contrastive learning (UACL) method is proposed. Label uncertainty analysis based on multi-level probability estimation is first conducted to separate reliable from unreliable samples, which are then processed with a hybrid ("hard" or "soft") contrastive learning (CL) strategy. For reliable samples, the "hard" CL drives the network to learn features that minimize the intra-class distance while maximizing the inter-class distance, according to the pseudo-labels. For unreliable samples, the "soft" CL aims to learn the similarities and differences among samples, where the predicted class probabilities are used to estimate a soft mask for adaptive feature-similarity measurement. Moreover, a triple-branch multimodal spectral-spatial joint feature representation pipeline, i.e., one spectral branch for hyperspectral images (HSIs) and two spatial branches for the multimodal data, is also introduced. By jointly learning from both labeled and unlabeled samples, the network obtains a more discriminative spectral-spatial feature representation, which further boosts classification performance. Extensive experiments on four well-known multimodal datasets demonstrate the effectiveness of the proposed semi-supervised classification method. Code is available at https://github.com/Ding-Kexin/UACL.
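The hybrid "hard"/"soft" CL strategy described above can be sketched concretely. Below is a minimal, hypothetical PyTorch rendering, not the authors' released implementation (see the repository linked above): reliable samples receive a supervised ("hard") contrastive loss driven by their pseudo-labels, while unreliable samples receive a softly weighted variant whose pairwise mask is the inner product of the predicted class-probability vectors. The function names, temperature, and confidence threshold are all illustrative assumptions.

```python
# Hypothetical sketch of the hybrid "hard"/"soft" contrastive losses
# described in the abstract; not the authors' code.
import torch
import torch.nn.functional as F

def hard_contrastive_loss(feats, pseudo_labels, temperature=0.1):
    """'Hard' CL for reliable samples: pull features with the same
    pseudo-label together and push different pseudo-labels apart."""
    feats = F.normalize(feats, dim=1)                        # (N, D) unit vectors
    sim = feats @ feats.t() / temperature                    # (N, N) scaled similarities
    off_diag = ~torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    pos = (pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)) & off_diag
    # Log-probability of each pair under a softmax over all other samples.
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(~off_diag, float("-inf")), dim=1, keepdim=True)
    return -(log_prob * pos.float()).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def soft_contrastive_loss(feats, class_probs, temperature=0.1):
    """'Soft' CL for unreliable samples: weight each pair by the inner
    product of predicted class-probability vectors (a soft mask in [0, 1])."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature
    off_diag = ~torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    soft_mask = (class_probs @ class_probs.t()) * off_diag.float()
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(~off_diag, float("-inf")), dim=1, keepdim=True)
    return -(log_prob * soft_mask).sum(1).div(soft_mask.sum(1).clamp(min=1e-6)).mean()

# Toy usage: split a batch by prediction confidence, then apply each loss.
torch.manual_seed(0)
N, D, C = 32, 64, 6
feats = torch.randn(N, D)                  # stand-in for learned features
probs = torch.softmax(torch.randn(N, C), dim=1)
conf, pseudo = probs.max(dim=1)
reliable = conf > 0.35                     # hypothetical confidence threshold
loss = torch.tensor(0.0)
if reliable.sum() > 1:                     # a contrastive loss needs >= 2 samples
    loss = loss + hard_contrastive_loss(feats[reliable], pseudo[reliable])
if (~reliable).sum() > 1:
    loss = loss + soft_contrastive_loss(feats[~reliable], probs[~reliable])
```

In practice, the two losses would be combined with the usual supervised classification loss on the labeled set, and the reliable/unreliable split would come from the paper's multi-level probability estimation rather than a single fixed threshold.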