Discriminant
Normalization (linguistics)
Artificial intelligence
Computer science
Intuition
Supervised learning
Labeled data
Pattern recognition (psychology)
Feature vector
Machine learning
Artificial neural network
Epistemology
Philosophy
Authors
Nairouz Mrabah,Mohamed Bouguessa,Riadh Ksantini
Identifier
DOI:10.24963/ijcai.2022/465
Abstract
Most recent graph clustering methods rely on pretraining graph auto-encoders using self-supervision techniques (pretext task) and finetuning based on pseudo-supervision (main task). However, the transition from self-supervision to pseudo-supervision has never been studied from a geometric perspective. Herein, we establish the first systematic exploration of the latent manifolds' geometry under the deep clustering paradigm; we study the evolution of their intrinsic dimension and linear intrinsic dimension. We find that the embedded manifolds undergo coarse geometric transformations under the transition regime: from curved low-dimensional to flattened higher-dimensional. Moreover, we find that this inappropriate flattening leads to clustering deterioration by twisting the curved structures. To address this problem, which we call Feature Twist, we propose a variational graph auto-encoder that can smooth the local curves before gradually flattening the global structures. Our results show a notable improvement over multiple state-of-the-art approaches by escaping Feature Twist.
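The abstract tracks how the *linear intrinsic dimension* of the embedded manifolds evolves during the transition from self-supervision to pseudo-supervision. The paper does not specify its estimator here, but a common proxy for linear intrinsic dimension is the number of principal components needed to explain a fixed fraction of the variance. The sketch below is an illustrative assumption, not the authors' exact method; the function name, the 95% threshold, and the synthetic circle data are all hypothetical.

```python
import numpy as np

def linear_intrinsic_dimension(X, var_threshold=0.95):
    """Illustrative proxy for linear intrinsic dimension: the number of
    principal components needed to explain `var_threshold` of the total
    variance of the embeddings X (shape: n_samples x n_features)."""
    Xc = X - X.mean(axis=0)                      # center the data
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values, descending
    var_ratio = s**2 / np.sum(s**2)              # explained-variance ratios
    return int(np.searchsorted(np.cumsum(var_ratio), var_threshold) + 1)

# Hypothetical example: points near a curved 1-D manifold (a circle)
# embedded in 10-D. The circle is intrinsically 1-D but occupies a 2-D
# linear subspace, so the linear estimate should be 2.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 500)
X = np.zeros((500, 10))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
X += 0.01 * rng.standard_normal(X.shape)         # small ambient noise
print(linear_intrinsic_dimension(X))
```

This gap between the (nonlinear) intrinsic dimension and the linear one is exactly what makes curved structures vulnerable: flattening such a manifold prematurely inflates its linear dimension, which the abstract identifies as the source of the Feature Twist problem.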