Cluster analysis
Dual (grammatical number)
Computer science
Artificial intelligence
Pattern recognition (psychology)
Art
Literature
Authors
Jintang Bian,Yixiang Lin,Xiaohua Xie,Chang‐Dong Wang,Lingxiao Yang,Jianhuang Lai,Feiping Nie
Identifiers
DOI:10.1109/tnnls.2025.3552969
Abstract
Multiview clustering (MVC) aims to integrate multiple related but different views of data to achieve more accurate clustering performance. Contrastive learning has found many applications in MVC due to its successful performance in unsupervised visual representation learning. However, existing MVC methods based on contrastive learning overlook the potential of high-similarity nearest neighbors as positive pairs. In addition, these methods do not capture the multilevel (i.e., cluster-, instance-, and prototype-level) representational structure that naturally exists in multiview datasets. These limitations can further hinder the structural compactness of learned multiview representations. To address these issues, we propose a novel end-to-end deep MVC method called multilevel contrastive MVC (MCMC) with dual self-supervised learning (DSL). Specifically, we first treat the nearest neighbors of an object in the latent subspace as positive pairs for the multiview contrastive loss, which improves the compactness of the representation at the instance level. Second, we perform multilevel contrastive learning (MCL) on clusters, instances, and prototypes to capture the multilevel representational structure underlying the multiview data in the latent space. In addition, we learn consistent cluster assignments for MVC by adopting a DSL method to associate structural representations at different levels. Evaluation experiments show that MCMC achieves intracluster compactness, intercluster separability, and higher clustering accuracy (ACC). Our code is available at https://github.com/bianjt-morning/MCMC.
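The abstract's first idea — treating an object's nearest neighbor in the latent subspace as the positive pair for a contrastive loss, rather than only the same index across views — can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the function name `nn_contrastive_loss`, the NumPy formulation, and the use of the most similar cross-view sample as the positive are all assumptions; MCMC additionally applies contrastive losses at the cluster and prototype levels, which this sketch omits.

```python
import numpy as np

def l2_normalize(x, axis=1, eps=1e-12):
    """Row-normalize embeddings so dot products are cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def nn_contrastive_loss(z_a, z_b, temperature=0.5):
    """InfoNCE-style loss where each sample in view A takes its nearest
    neighbor in view B (highest cosine similarity) as the positive pair,
    and all other view-B samples act as negatives. Illustrative sketch only.
    """
    z_a = l2_normalize(np.asarray(z_a, dtype=float))
    z_b = l2_normalize(np.asarray(z_b, dtype=float))
    sim = (z_a @ z_b.T) / temperature        # (N, N) similarity matrix
    pos_idx = np.argmax(sim, axis=1)         # nearest neighbor in view B
    # Numerically stable row-wise log-softmax.
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the nearest-neighbor positive, averaged.
    return -log_prob[np.arange(len(z_a)), pos_idx].mean()
```

Pulling the positive toward a high-similarity neighbor (instead of a fixed index) is what tightens instance-level compactness: samples that are already close in the latent subspace are explicitly attracted, while all other samples in the batch are pushed away.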