Computer science
Overfitting
Artificial intelligence
Sample (material)
Classifier (UML)
Convolutional neural network
Pattern recognition (psychology)
Artificial neural network
Machine learning
Chemistry
Chromatography
Authors
Jiaqi Mi,Congcong Ma,Lihua Zheng,Man Zhang,Minzan Li,Minjuan Wang
Identifier
DOI:10.1016/j.eswa.2023.120943
Abstract
The small-sample task is a current challenge in deep learning, owing to high annotation costs and the inherent scarcity of targets, such as images of rare animals and plants. Data augmentation is an effective way to mitigate the semantic sparseness and overfitting of deep convolutional neural networks in small-sample classification, but its effectiveness remains to be improved. We propose a Wasserstein GAN with confidence loss (WGAN-CL) to expand small-sample plant datasets. First, a shallower GAN structure is designed to suit the limited plant data, and shortcut-stream connections are introduced into the basic network to enlarge the model's solution space without adding training parameters. Second, the Wasserstein distance combined with confidence loss is used to optimize the model. Experiments demonstrate that the Wasserstein distance with gradient penalty guarantees stable training and diverse outputs, and that the sample screening strategy based on confidence loss ensures that generated images are semantically close to real images, which is critical for subsequent classification. To verify the effectiveness of WGAN-CL for plant small-sample augmentation, 2000 flower images from 5 categories of the "Flowers" dataset are used as training samples, and 2000 augmented images are added to the training set to improve the performance of a classical classifier. WGAN-CL achieves a significant improvement over state-of-the-art methods: a 2.2% gain in recall and a 2% gain in F1-score. Experiments on the "Plant Leaves" dataset also achieve excellent results, demonstrating that WGAN-CL can be transferred to other tasks. WGAN-CL uses fewer computational resources while balancing effectiveness and robustness, proving the practicality of the model.
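The confidence-based screening idea mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's exact confidence loss: here generated samples are kept only when a classifier's maximum softmax probability exceeds a hypothetical threshold, standing in for "close to the real image in semantic features".

```python
import numpy as np

def screen_generated_samples(probs, threshold=0.8):
    """Keep generated samples whose classifier confidence is high.

    probs: (n_samples, n_classes) softmax outputs for generated images.
    threshold: hypothetical cutoff; the paper's actual criterion is
    defined by its confidence loss, not reproduced here.
    Returns a boolean mask over the samples.
    """
    confidence = probs.max(axis=1)  # max class probability per sample
    return confidence >= threshold

# Three generated samples scored by a 3-class classifier:
probs = np.array([
    [0.95, 0.03, 0.02],   # confident -> keep for augmentation
    [0.40, 0.35, 0.25],   # ambiguous -> discard
    [0.10, 0.85, 0.05],   # confident -> keep
])
mask = screen_generated_samples(probs)
```

Only the samples passing the mask would be added to the augmented training set; the threshold trades off augmentation volume against semantic fidelity.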