Artificial intelligence
Orchard
Camellia oleifera
Cluster analysis
Minimum bounding box
Computer science
Computer vision
Bounding overwatch
Pattern recognition (psychology)
Mathematics
Image (mathematics)
Horticulture
Biology
Authors
Yunchao Tang, Hao Zhou, Hongjun Wang, Yunqi Zhang
Identifier
DOI:10.1016/j.eswa.2022.118573
Abstract
In the complex environment of an orchard, changes in illumination, leaf occlusion, and fruit overlap make it challenging for mobile picking robots to detect and locate oil-seed camellia fruit. To address this problem, YOLO-Oleifera, a fruit detection model based on YOLOv4-tiny, was developed. To obtain clustering results appropriate to the size of the Camellia oleifera fruit, the k-means++ clustering algorithm was used instead of the k-means clustering algorithm used by YOLOv4-tiny to determine bounding box priors. Convolutional kernels of 1 × 1 and 3 × 3 were added after the second and third CSPBlock modules of the YOLOv4-tiny model, respectively, allowing the model to learn Camellia oleifera fruit feature information while reducing overall computational complexity. In contrast to classic stereo matching based on binocular camera images, this method uses the bounding box generated by the YOLO-Oleifera model to extract the fruit's region of interest and then adaptively performs stereo matching according to the generation mechanism of the bounding box. This allows the determination of disparity and facilitates the subsequent use of the triangulation principle to determine the picking position of the fruit. An ablation experiment demonstrated that the modifications effectively improved the YOLOv4-tiny model. Camellia oleifera fruit images obtained under sunlight and shading conditions were used to test the YOLO-Oleifera model, and the model robustly detected the fruit under different illumination conditions. Occluded Camellia oleifera fruit reduced precision and recall owing to the loss of semantic information. Compared with the deep learning models YOLOv5-s, YOLOv3-tiny, and YOLOv4-tiny, the YOLO-Oleifera model achieved the highest AP of 0.9207 with the smallest weight file of 29 MB. The YOLO-Oleifera model took an average of 31 ms to detect each fruit image, which is fast enough to meet the demand for real-time detection. The algorithm exhibited high positioning stability and robust function despite changes in illumination. The results of this study can provide a technical reference for the robust detection and positioning of Camellia oleifera fruit by a mobile picking robot in a complex orchard environment.
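The abstract describes replacing YOLOv4-tiny's plain k-means anchor estimation with k-means++ clustering of labelled box sizes. The sketch below is not the authors' code; it only illustrates the general idea of deriving anchor priors from (width, height) samples with k-means++ seeding. The synthetic box sizes, the choice of six clusters, and the use of Euclidean distance (rather than an IoU-based distance) are all assumptions.

```python
# Minimal sketch of anchor-prior estimation with k-means++ seeding.
# Assumptions: synthetic (width, height) data, k = 6, Euclidean distance.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical labelled-box sizes in pixels (width, height) for camellia fruit.
box_wh = rng.normal(loc=(55, 60), scale=(12, 14), size=(500, 2)).clip(min=10)

# k-means++ initialisation spreads the initial centroids, which tends to yield
# anchors better matched to the actual fruit-size distribution than random seeds.
kmeans = KMeans(n_clusters=6, init="k-means++", n_init=10, random_state=0)
kmeans.fit(box_wh)

# Sort the resulting anchor priors by area, smallest to largest.
anchors = sorted(kmeans.cluster_centers_.round(1).tolist(), key=lambda wh: wh[0] * wh[1])
print("anchor priors (w, h):", anchors)
```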
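The abstract also mentions that, once stereo matching within the detected bounding box yields a disparity, the triangulation principle gives the picking position. The following is a minimal sketch of that standard binocular geometry (Z = f·B/d), not the paper's implementation; the calibration values and the example pixel/disparity are invented for illustration.

```python
# Minimal sketch of depth recovery from disparity via triangulation.
# Assumptions: rectified stereo pair, pinhole model, made-up calibration values.
def triangulate(u: float, v: float, disparity: float,
                f: float, baseline: float, cx: float, cy: float):
    """Return the (X, Y, Z) camera-frame position of a matched fruit pixel."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid match")
    z = f * baseline / disparity   # depth from similar triangles
    x = (u - cx) * z / f           # back-project pixel column
    y = (v - cy) * z / f           # back-project pixel row
    return x, y, z

# Example with assumed calibration: f = 800 px, baseline = 0.06 m,
# principal point (640, 360), fruit-centre pixel (700, 400), disparity 40 px.
print(triangulate(700, 400, 40.0, f=800.0, baseline=0.06, cx=640.0, cy=360.0))
```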