Cluster analysis
Subspace topology
Outlier
Computer science
Matrix decomposition
Mathematics
Algorithm
Matrix norm
Artificial intelligence
Pattern recognition (psychology)
Feature vector
Quantum mechanics
Physics
Identifier
DOI: 10.1016/j.neunet.2022.09.007
Abstract
Self-representation based subspace learning has shown its effectiveness in many applications, but most existing methods do not account for the differences between views. As a result, the learned self-representation matrix cannot characterize the clustering structure well. Moreover, some methods involve an undesired weight vector in the tensor nuclear norm, which reduces the flexibility of the algorithm in practical applications. To handle these problems, we present a tensorized multi-view subspace clustering method. Specifically, our method employs matrix factorization and decomposes the self-representation matrix into an orthogonal projection matrix and an affinity matrix. We also impose ℓ1,2-norm regularization on the affinity representation to characterize the cluster structure. Moreover, the proposed method uses a weighted tensor Schatten p-norm to explore the higher-order structure and complementary information embedded in multi-view data, which assigns a suitable weight to each view automatically without additional weight and penalty parameters. We apply an adaptive loss function to the model to maintain robustness to outliers and efficiently learn the data distribution. Extensive experimental results on different datasets reveal that our method is superior to other state-of-the-art multi-view subspace clustering methods.
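To make the quantities named in the abstract concrete, the LaTeX sketch below writes out a generic self-representation factorization and the standard t-SVD-based weighted tensor Schatten p-norm, followed by an illustrative objective that combines the pieces the abstract mentions. The symbols (X^(v), P^(v), A^(v), the loss ℓ_ada, and the trade-off parameters λ1, λ2) are assumptions introduced here for illustration; the paper's exact formulation may differ.

```latex
% Minimal sketch, not the paper's exact objective.
% Self-representation with matrix factorization for view v:
%   X^{(v)} \approx X^{(v)} Z^{(v)}, with Z^{(v)} = P^{(v)} A^{(v)} and (P^{(v)})^\top P^{(v)} = I.
% Weighted tensor Schatten p-norm of the affinity tensor \mathcal{A}
% (frontal slices \bar{A}^{(k)} taken after a DFT along the third mode):
\[
  \|\mathcal{A}\|_{w,S_p}^{p}
  = \sum_{k=1}^{n_3} \sum_{i=1}^{\min(n_1,n_2)}
    w_i \,\sigma_i\!\bigl(\bar{A}^{(k)}\bigr)^{p},
  \qquad 0 < p \le 1 .
\]
% Illustrative overall objective (placeholders \ell_{\mathrm{ada}}, \lambda_1, \lambda_2 are hypothetical):
\[
  \min_{\{P^{(v)},\,A^{(v)}\}}
  \sum_{v} \ell_{\mathrm{ada}}\!\bigl(X^{(v)} - X^{(v)} P^{(v)} A^{(v)}\bigr)
  + \lambda_1 \sum_{v} \|A^{(v)}\|_{1,2}
  + \lambda_2 \|\mathcal{A}\|_{w,S_p}^{p},
  \quad \text{s.t. } (P^{(v)})^\top P^{(v)} = I .
\]
```

Here \mathcal{A} is understood as the third-order tensor obtained by stacking the per-view affinity matrices A^(v) along the third mode, so the Schatten p-norm weights w_i act on the singular values of each transformed frontal slice rather than on the views themselves.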