Keywords
Shoot, Upsampling, Computer science, Artificial intelligence, Computer vision, Pattern recognition (psychology), Mathematics, Horticulture, Image (mathematics), Biology
Authors
Luyu Shuai,Jiong Mu,Xueqin Jiang,Peng Chen,Boda Zhang,Hongdan Li,Yuchao Wang,Zhiyong Li
Identifier
DOI:10.1016/j.biosystemseng.2023.06.007
Abstract
Accurate detection of tea shoots and precise location of picking points are prerequisites for automated, intelligent and accurate tea picking. A method was developed for detecting tea shoots and their key points and for localising picking points in complex environments. Images of four types of tea shoots were collected from multiple fields of view in a tea plantation over two months, and labelling criteria were established. The YOLO-Tea model was developed based on the YOLOv5 network model. It uses a content-aware upsampling operator (CARAFE) with a larger receptive field to upsample tea-shoot features, and adds a Convolutional Block Attention Module (CBAM) so that the model attends to both channel and spatial dimensions when detecting and localising important regions of tea shoots in a large field of view. A Bottleneck Transformers module injects global self-attention into the residual blocks to create long-distance dependencies in the tea-shoot feature maps, and a six-point landmark regression head was added. The experimental results demonstrated that the YOLO-Tea model improved the mean Average Precision (mAP) of tea shoots and their key points by 5.26% compared to YOLOv5. Finally, image processing methods were used during the model inference phase to locate picking-point positions based on the key-point information. This study has theoretical and practical implications for the detection of tea shoots and their key points, tea shoot alignment, phenotype identification, pose estimation and picking-point localisation of premium teas in complex environments.
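The abstract names CBAM, which applies attention along both the channel and spatial dimensions of a feature map. As an illustration of the channel-attention half only, here is a minimal NumPy sketch; the weight matrices `w1`, `w2` and the reduction ratio are illustrative stand-ins, not the paper's actual parameters, and the real YOLO-Tea implementation is not given in this abstract.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map.

    Global average- and max-pooled channel descriptors are passed
    through a shared two-layer MLP (w1, w2), summed, and squashed to
    per-channel weights in (0, 1) that rescale the input channels.
    """
    avg = feat.mean(axis=(1, 2))           # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))             # (C,) max-pooled descriptor

    def mlp(v):
        # shared MLP: C -> C//r -> C, ReLU in between
        return np.maximum(v @ w1, 0.0) @ w2

    weights = sigmoid(mlp(avg) + mlp(mx))  # (C,) per-channel attention
    return feat * weights[:, None, None]

# toy usage: 8 channels, 4x4 spatial map, reduction ratio r = 2
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((8, 4)) * 0.1    # illustrative weights
w2 = rng.standard_normal((4, 8)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the attention weights lie in (0, 1), the output never exceeds the input in magnitude; the full CBAM would follow this with a spatial-attention stage built from a small convolution.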
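The abstract says picking points are located from the six predicted key points during inference, but does not state the rule. A minimal sketch, assuming (hypothetically) that two of the landmarks mark the shoot base and bud tip and that the picking point is taken a fixed fraction of the way along that segment:

```python
def picking_point(shoot_base, bud_tip, frac=0.5):
    """Hypothetical picking-point rule: linearly interpolate between
    two landmarks, frac of the way from shoot_base toward bud_tip.
    Points are (x, y) pixel coordinates."""
    bx, by = shoot_base
    tx, ty = bud_tip
    return (bx + frac * (tx - bx), by + frac * (ty - by))

# toy usage with made-up landmark coordinates
pt = picking_point((100.0, 240.0), (120.0, 180.0), frac=0.3)
print(pt)  # (106.0, 222.0)
```

The interpolation fraction, and which of the six landmarks feed the rule, are assumptions for illustration only; the paper's actual image-processing procedure may differ.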