Computer Science
Artificial Intelligence
Generative Adversarial Network
Computer Graphics
Pattern Recognition (Psychology)
Image (Mathematics)
Generative Grammar
Machine Learning
Adversarial System
Authors
Lei Wang,Yu Sun,Zheng Wang
Identifier
DOI:10.1007/s00371-021-02262-8
Abstract
Generative adversarial network (GAN) has recently been extended to solve semi-supervised image classification tasks. However, it remains a great challenge for GAN to exploit unlabeled images to boost its classification ability when labeled images are very limited. In this paper, we propose a novel CCS-GAN model for semi-supervised image classification, which aims to improve its classification ability by utilizing the cluster structure of unlabeled images and 'bad' generated images. Specifically, it employs a new cluster consistency loss to constrain its classifier to keep the local discriminative consistency in each cluster of unlabeled images, thus providing implicit supervised information to boost the classifier. Meanwhile, it adopts an enhanced feature matching approach to encourage its generator to produce adversarial images from the low-density regions of the real distribution, which enhances the discriminative ability of the classifier during adversarial training and suppresses the mode collapse problem. Extensive experiments on four benchmark datasets show that the proposed CCS-GAN achieves very competitive performance in semi-supervised image classification tasks compared with several state-of-the-art competitors.
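To make the two loss terms named in the abstract more concrete, below is a minimal sketch, assuming PyTorch; the function names, the cluster assignments passed in, and the use of prediction centroids and first-order feature statistics are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a cluster consistency loss and a feature matching
# loss, in the spirit of the abstract; not the paper's official code.
import torch
import torch.nn.functional as F


def cluster_consistency_loss(logits: torch.Tensor, cluster_ids: torch.Tensor) -> torch.Tensor:
    """Encourage unlabeled images in the same cluster to receive consistent
    class predictions by pulling each prediction toward its cluster centroid."""
    probs = F.softmax(logits, dim=1)
    loss = logits.new_zeros(())
    for c in cluster_ids.unique():
        mask = cluster_ids == c
        if mask.sum() < 2:
            continue  # a singleton cluster gives no consistency signal
        cluster_probs = probs[mask]
        centroid = cluster_probs.mean(dim=0, keepdim=True).detach()
        loss = loss + F.mse_loss(cluster_probs, centroid.expand_as(cluster_probs))
    return loss


def feature_matching_loss(real_features: torch.Tensor, fake_features: torch.Tensor) -> torch.Tensor:
    """Standard feature matching: align the mean intermediate discriminator
    features of generated images with those of real images."""
    return F.mse_loss(fake_features.mean(dim=0), real_features.mean(dim=0))
```

In such a setup, the cluster IDs for unlabeled images would typically come from an external clustering step (e.g., k-means on extracted features), and the generator's total loss would combine the feature matching term with the usual adversarial objective.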