Keywords
Interpretability
Autoencoder
Computer science
Feature learning
Artificial intelligence
Feature (linguistics)
Artificial neural network
Deep learning
Boltzmann machine
Graph
Prior probability
Representation (politics)
Machine learning
Pattern recognition (psychology)
Theoretical computer science
Bayesian probability
Political science
Law
Philosophy
Linguistics
Politics
Authors
Qiao Ke,Xinhui Jing,Marcin Woźniak,Shuang Xu,Yunji Liang,Jiangbin Zheng
Identifier
DOI:10.1016/j.ins.2023.119903
Abstract
Neural networks learn task-oriented high-level representations in an end-to-end manner by stacking multiple layers. Generative models have developed rapidly with the emergence of deep neural networks, but they still suffer from insufficient authenticity of the generated images and from a lack of diversity, consistency, and explainability in the generation process. Disentangled representation is an effective way to learn high-level feature representations and to make deep neural networks interpretable. We propose a general disentangled representation learning network that uses a variational autoencoder as the basic framework for image generation. A graph-based structure over the priors is embedded in the last module of the deep encoder network to build separate feature spaces for class, task-oriented, and task-unrelated information, and the priors are adaptively modified according to the task relevance of a generated image. Semi-supervised learning is further incorporated into the disentangled representation framework to reduce the need for labels and, under the task-unrelated feature assumption, to extend the majority of the feature space. Experimental results show that the proposed method is effective for various types of images and has good potential for further research and development.
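To make the latent-space partitioning described in the abstract concrete, below is a minimal PyTorch sketch of a variational autoencoder whose latent code is split into class, task-oriented, and task-unrelated blocks. It is not the authors' implementation: the name PartitionedVAE, all layer sizes, the block dimensions, and the standard Gaussian prior are illustrative assumptions, and the paper's graph-structured adaptive prior and semi-supervised component are omitted.

```python
# Minimal sketch (assumptions only, not the paper's code) of a VAE whose
# latent space is partitioned into class / task-oriented / task-unrelated blocks.
import torch
import torch.nn as nn


class PartitionedVAE(nn.Module):
    def __init__(self, in_dim=784, hidden=400, z_class=8, z_task=8, z_other=16):
        super().__init__()
        self.z_dims = [z_class, z_task, z_other]
        z_total = sum(self.z_dims)
        # Shared encoder trunk; the final module produces all latent blocks at once.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_total)
        self.logvar = nn.Linear(hidden, z_total)
        self.decoder = nn.Sequential(
            nn.Linear(z_total, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # Standard reparameterization trick: z = mu + sigma * eps.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = self.reparameterize(mu, logvar)
        # Split the latent code into the three feature spaces named in the abstract.
        z_class, z_task, z_other = torch.split(z, self.z_dims, dim=-1)
        recon = self.decoder(z)
        return recon, mu, logvar, (z_class, z_task, z_other)


def vae_loss(recon, x, mu, logvar):
    # Standard ELBO: reconstruction term plus KL divergence to a unit Gaussian.
    # The paper replaces this fixed prior with a graph-based, adaptively
    # modified one; that part is not reproduced here.
    rec = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld


if __name__ == "__main__":
    model = PartitionedVAE()
    x = torch.rand(4, 784)  # toy batch of flattened images in [0, 1)
    recon, mu, logvar, parts = model(x)
    print(vae_loss(recon, x, mu, logvar).item(), [p.shape for p in parts])
```

In a sketch like this, splitting the latent code only reserves separate coordinates for the three kinds of information; actual disentanglement would still have to be encouraged by additional supervision or by structure on the priors of each block, which is what the paper's graph-based, task-adaptive prior and semi-supervised training are intended to provide.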