A comparison between pixel-based deep learning and object-based image analysis (OBIA) for detection of individual cabbage plants based on UAV visible-light images
Accurately and rapidly extracting crops from the ultra-high-spatial-resolution imagery of uncrewed aerial vehicles (UAVs) is challenging. Object-based image analysis (OBIA) has been regarded as an effective technique for high-spatial-resolution image classification because of its ability to achieve high accuracy by integrating multi-dimensional features. In recent years, deep learning (DL) techniques, which automatically learn image features from large numbers of images, have shown great potential for crop monitoring. However, a systematic comparison of these two mainstream methods for crop phenotype monitoring has not been conducted. This study therefore compares the performance of two advanced methods, DL and OBIA, on the task of detecting individual cabbage plants. The results show that the Mask R-CNN deep learning model outperforms the object-based image analysis with multilevel distance transform watershed segmentation (OBIA-MDTWS) method in crop extraction and counting, with an overall mean F1-score and accuracy that are 2.70 and 4.15 percentage points higher, respectively. The Mask R-CNN model is also more computationally efficient, running 3.74 times faster than the OBIA-MDTWS method. In summary, the Mask R-CNN deep learning model performs better in vegetable extraction and quantity estimation, providing technical support for subsequent field nursery management and precision planting.
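The abstract reports detection performance as an F1-score and a counting accuracy. As a minimal illustrative sketch (the exact matching procedure used in the study is not specified here), these metrics are commonly computed from the true-positive, false-positive, and false-negative counts of matched plant detections, and from the predicted versus ground-truth plant totals:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1-score from detection match counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def counting_accuracy(predicted: int, actual: int) -> float:
    """Counting accuracy as 1 minus the relative counting error (one common definition)."""
    return 1.0 - abs(predicted - actual) / actual

# Hypothetical example counts, not taken from the study:
print(round(f1_score(tp=90, fp=10, fn=10), 2))          # 0.9
print(round(counting_accuracy(predicted=95, actual=100), 2))  # 0.95
```

A difference of 2.70 percentage points in mean F1-score, as reported for Mask R-CNN over OBIA-MDTWS, corresponds to a 0.027 gap on this 0-to-1 scale.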