Artificial intelligence
Computer science
Pattern recognition (psychology)
Feature extraction
Preprocessor
Discriminator
Classifier (UML)
Interpretability
Autoencoder
Artificial neural network
Computer vision
Detector
Telecommunications
Authors
Ke Zhang, Wenning Hao, Xiaohan Yu, T. Shao, Qiuhui Shen
Abstract
The interpretable image classifier VAE-FNN can extract high-level features for classification from complex image information and provide explanations consistent with human intuition. However, because of the VAE's limited reconstruction ability, feature extraction and interpretable classification for high-definition images remain challenging. This paper proposes an image preprocessing method and constructs a model, E2GAN, that extracts low-dimensional interpretable features from high-definition images. The model is built on a pre-trained StyleGAN generator, and two mapping networks are trained: one extracts the low-dimensional compressed encoding of the input image, and the other restores it to the matrix representation required by the StyleGAN generator, which effectively improves the quality of feature extraction and image reconstruction. A discriminator is introduced for adversarial training with the mapping networks, further improving the realism of the reconstructed images. A training algorithm for E2GAN is designed, and a decoupling loss on the low-dimensional encoding is added to further improve its semantic interpretability. Experiments on the CelebA-HQ dataset show that E2GAN can extract low-dimensional, semantically informative features from high-definition images, which can be used to train accurate and interpretable fuzzy neural network classifiers.
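The abstract's data flow (image → low-dimensional code → StyleGAN latent matrix) can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the mapping networks are stand-in linear maps, the image is downscaled to keep the sketch small, the code size (32) is illustrative, and the decoupling penalty shown is one common covariance-based choice; the paper's exact loss and network architectures are not specified in the abstract. The 18×512 W+ matrix shape matches StyleGAN generators at 1024×1024 resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM = 256 * 256 * 3        # flattened input image (downscaled for the sketch)
CODE_DIM = 32                  # low-dimensional interpretable code (assumed size)
N_STYLES, STYLE_DIM = 18, 512  # StyleGAN W+ latent matrix shape at 1024x1024

# Mapping network 1: image -> low-dimensional compressed encoding.
# (A real encoder would be convolutional; a linear map keeps the sketch short.)
W_enc = rng.normal(scale=1e-3, size=(CODE_DIM, IMG_DIM))

# Mapping network 2: code -> matrix representation required by the generator.
W_map = rng.normal(scale=0.1, size=(N_STYLES * STYLE_DIM, CODE_DIM))

def encode(image_flat):
    """Extract the low-dimensional compressed encoding of an input image."""
    return W_enc @ image_flat

def to_wplus(code):
    """Restore the code to the W+ matrix the StyleGAN generator consumes."""
    return (W_map @ code).reshape(N_STYLES, STYLE_DIM)

def decoupling_loss(codes):
    """Penalize off-diagonal covariance across code dimensions, one common
    way to encourage decoupled (disentangled) features; the paper's exact
    decoupling loss is not given in the abstract."""
    centered = codes - codes.mean(axis=0)
    cov = (centered.T @ centered) / len(codes)
    return float(np.sum(cov ** 2) - np.sum(np.diag(cov) ** 2))

image = rng.normal(size=IMG_DIM)      # stand-in for a flattened photo
code = encode(image)                  # shape (32,)
w_plus = to_wplus(code)               # shape (18, 512), fed to the frozen generator
batch_codes = rng.normal(size=(8, CODE_DIM))
penalty = decoupling_loss(batch_codes)
```

The frozen StyleGAN generator and the discriminator used for adversarial training are omitted; in the full model, `w_plus` would be passed through the pre-trained generator and the reconstruction compared against the input image.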