Autoencoder
Cluster analysis
Computer science
Latent variable
Embedding
Cloud computing
Artificial intelligence
Latent variable model
Interpretability
Machine learning
Data mining
Deep learning
Authors
Yue Liu, Zitu Liu, Shuang Li, Zhenyao Yu, Yike Guo, Qun Li, Guoyin Wang
Identifier
DOI:10.1016/j.patcog.2023.109530
Abstract
The Variational Autoencoder (VAE) has been widely and successfully used to learn coherent latent representations of data. However, the lack of interpretability in the latent space constructed by the VAE under its prior distribution remains an urgent problem. This paper proposes a VAE with understandable concept embedding, named Cloud-VAE, which constructs an interpretable latent space by disentangling the latent variables and modeling their uncertainty with a cloud model. First, a cloud-model-based clustering algorithm casts an initial constraint on the latent space into a prior distribution over concepts, which can be embedded into the latent space of the VAE to disentangle the latent variables. Second, a reparameterization trick based on the forward cloud transformation algorithm is designed to estimate the concepts of the latent space by increasing the randomness of the latent variables. Furthermore, the variational lower bound of Cloud-VAE is derived to guide the training process in constructing concepts of the latent space, realizing a mutual mapping between the latent space and the concept space. Finally, experimental results on six benchmark datasets show that Cloud-VAE has good clustering and reconstruction performance, can explicitly explain the aggregation process of the model, and discovers more interpretable disentangled representations.
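The forward cloud transformation mentioned in the abstract can be illustrated with a short sketch. In the standard normal cloud model, a concept is described by three parameters: expectation Ex, entropy En, and hyper-entropy He; a "cloud drop" is drawn by first sampling a perturbed entropy En' ~ N(En, He²) and then sampling x ~ N(Ex, En'²). Because randomness enters only through standard-normal noise, this composes with the usual VAE reparameterization trick. The function name and parameterization below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def forward_cloud_transform(Ex, En, He, n, rng=None):
    """Sample n cloud drops from a normal cloud model (Ex, En, He).

    Illustrative sketch: both random draws are expressed as deterministic
    functions of standard-normal noise, i.e. in reparameterized form.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps1 = rng.standard_normal(n)      # noise for the entropy perturbation
    eps2 = rng.standard_normal(n)      # noise for the drop itself
    En_prime = En + He * eps1          # En' ~ N(En, He^2)
    x = Ex + np.abs(En_prime) * eps2   # x  ~ N(Ex, En'^2), given En'
    return x

# Drops concentrate around Ex; He > 0 thickens the cloud's edges.
drops = forward_cloud_transform(Ex=0.0, En=1.0, He=0.1, n=100_000,
                                rng=np.random.default_rng(0))
```

The marginal variance of a drop is En² + He², so a small He yields samples close to an ordinary Gaussian latent, while larger He injects the extra uncertainty the abstract attributes to the cloud model.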