Authors
Dong Chen, Xinda Qi, Yu Zheng, Yuzhen Lu, Yanbo Huang, Zhaojian Li
Abstract
Weed management plays an important role in protecting crop yield and quality. Conventional weed control relies largely on intensive, blanket herbicide application, which incurs significant management costs and poses hazards to the environment and human health. Machine vision-based automated weeding has gained increasing attention for sustainable weed management through weed recognition and site-specific treatments. However, reliably recognizing weeds under variable field conditions remains challenging, in part because of the difficulty of curating large-scale, expert-labeled weed image datasets for supervised training of weed recognition algorithms. Data augmentation methods, including traditional geometric/color transformations and more advanced generative adversarial networks (GANs), can supplement data collection and labeling efforts by algorithmically expanding the scale of datasets. Recently, diffusion models have emerged in the field of image synthesis, providing a new means of augmenting image datasets to power machine vision systems. This study presents a novel investigation of the efficacy of diffusion models for generating weed images to enhance weed identification. Experiments on two large public multi-class weed datasets showed that diffusion models yielded the best trade-off between sample fidelity and diversity and achieved the lowest (best) Fréchet Inception Distance (FID) compared with GANs (BigGAN, StyleGAN2, StyleGAN3). For instance, on a ten-class weed dataset (CottonWeedID10), including synthetic weed images improved the accuracy, precision, and recall of deep learning classifiers (i.e., VGG16, Inception-v3, and ResNet50) by 1.17% (97.30% to 98.47%), 1.21% (97.92% to 99.13%), and 2.30% (96.06% to 98.27%), respectively. Moreover, models trained with only 10% real images, the remainder being synthetic data, achieved testing accuracy exceeding 94%.
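For context, the traditional geometric/color transformations mentioned in the abstract are typically implemented as a composed augmentation pipeline. The following is a minimal PyTorch/torchvision sketch; the specific operations and magnitudes are illustrative assumptions, not the settings used in the paper:

```python
from torchvision import transforms

# Classic geometric/color augmentation pipeline for weed images.
# Operation choices and magnitudes here are illustrative assumptions.
classic_augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),               # geometric: mirror
    transforms.RandomRotation(degrees=15),                # geometric: small rotation
    transforms.ColorJitter(brightness=0.2,                # color: lighting variation
                           contrast=0.2,
                           saturation=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),  # geometric: crop/zoom
    transforms.ToTensor(),
])
```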
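Sampling synthetic weed images from a trained diffusion model can be sketched with the Hugging Face diffusers library. This is an assumed toolchain (the paper does not specify one), and the checkpoint path is hypothetical:

```python
from diffusers import DDPMPipeline

# Load a DDPM trained on weed images; "checkpoints/weed-ddpm" is a
# hypothetical local checkpoint, not an artifact released with the paper.
pipe = DDPMPipeline.from_pretrained("checkpoints/weed-ddpm")
pipe.to("cuda")

# Sample a batch of synthetic weed images (returned as PIL images).
images = pipe(batch_size=8).images
for i, img in enumerate(images):
    img.save(f"synthetic/weed_{i:04d}.png")
```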
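The FID metric used to compare generators measures the distance between Inception-v3 feature statistics of real and generated images; lower values indicate generated samples closer to the real distribution. One way to compute it is with torchmetrics (an assumption; the paper does not name its FID implementation):

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# FID over Inception-v3 pool features; expects uint8 images of shape (N, 3, H, W).
fid = FrechetInceptionDistance(feature=2048)

# Placeholder batches; in practice, load real and generated weed images.
real_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)

fid.update(real_images, real=True)
fid.update(fake_images, real=False)
print(f"FID: {fid.compute():.2f}")  # lower is better
```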
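The low-data experiment (10% real images plus synthetic data) amounts to subsampling the real training set and concatenating it with the generated images before classifier training. A sketch assuming a hypothetical ImageFolder directory layout with matching class subfolders in both sets:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Subset
from torchvision import datasets, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

# Hypothetical directory layout; both roots must share identical class
# subfolders so ImageFolder assigns consistent class indices.
real = datasets.ImageFolder("data/real_weeds/train", transform=tf)
synthetic = datasets.ImageFolder("data/synthetic_weeds", transform=tf)

# Keep a random 10% of the real images, as in the low-data experiment.
g = torch.Generator().manual_seed(0)
keep = torch.randperm(len(real), generator=g)[: len(real) // 10]
mixed = ConcatDataset([Subset(real, keep.tolist()), synthetic])

loader = DataLoader(mixed, batch_size=32, shuffle=True, num_workers=4)
```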