MTA-YOLACT: Multitask-aware network on fruit bunch identification for cherry tomato robotic harvesting

Keywords: pedicel, artificial intelligence, contour, segmentation, computer science, task (project management), pattern recognition (psychology), horticulture, biology, engineering, computer graphics (images), systems engineering
Authors
Yajun Li, Qingchun Feng, Cheng Liu, Zicong Xiong, Yuhuan Sun, Feng Xie, Tao Li, Chunjiang Zhao
Source
Journal: European Journal of Agronomy [Elsevier]
Volume: 146, Article 126812 | Citations: 45
Identifier
DOI: 10.1016/j.eja.2023.126812
Abstract

Accurate and rapid perception of fruit bunch posture is necessary for a cherry tomato harvesting robot to hold and separate the bunch successfully. According to the postural relationship among the fruit bunch, the bunch pedicel, and the plant's main stem, the robotic end-effector's holding region and approach path can be determined, both of which are important for a successful picking operation. The main goal of this research was to propose a multitask-aware network (MTA-YOLACT) that simultaneously performs region detection of fruit bunches and region segmentation of pedicels and main stems. MTA-YOLACT, extended from the pre-trained YOLACT model, includes two branch networks for detection and instance segmentation that share the same backbone network; a loss function with weighting coefficients for the two branches was adopted to balance the multi-task learning, according to the tasks' homoscedastic uncertainty during model training. Furthermore, in order to cluster the fruit bunch, pedicel, and main stem belonging to the same bunch target, a classification and regression tree (CART) model was built based on the positional relationships of the regions output by MTA-YOLACT. An image dataset of cherry tomato plants in a Chinese greenhouse was built to train and test the model. The results indicated promising performance of the proposed network, with an F1-score of 95.4% for detecting fruit bunches and mean Average Precision of 38.7% and 51.9% for instance segmentation of the pedicel and main stem, which were 1.1% and 3.5% higher than the original YOLACT. Beyond that, our approach performed real-time detection and instance segmentation at 13.3 frames per second (FPS). Whole bunches could be identified by the CART model with an average accuracy of 99.83% and a time cost of 9.53 ms. These results demonstrate that this research could support the development of the harvesting robot's vision unit and the end-effector's motion planning in future work.
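
The abstract states that the two branch losses are balanced with weighting coefficients derived from the tasks' homoscedastic uncertainty. Below is a minimal sketch of that weighting scheme, assuming the commonly used formulation with learnable log-variances (Kendall et al., 2018); it is not the authors' released code, and `detection_loss` / `segmentation_loss` are placeholders for whatever the two YOLACT-style heads return.

```python
# Hedged sketch: homoscedastic-uncertainty weighting of a two-branch loss.
# Assumption: total = sum_i( exp(-s_i) * L_i + s_i ), where s_i = log(sigma_i^2)
# is a learnable log-variance per task; noisier tasks are down-weighted.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, num_tasks: int = 2):
        super().__init__()
        # One learnable log-variance per task, initialised to 0 (sigma^2 = 1).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = torch.zeros((), device=self.log_vars.device)
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])  # 1 / sigma_i^2
            total = total + precision * loss + self.log_vars[i]
        return total

# Usage: the log-variances are optimised jointly with the network weights.
# criterion = UncertaintyWeightedLoss(num_tasks=2)
# total_loss = criterion([detection_loss, segmentation_loss])
# total_loss.backward()
```

The abstract also describes a CART model that groups a detected bunch box with its pedicel and main-stem regions using their positional relationship. The sketch below uses scikit-learn's DecisionTreeClassifier (a CART implementation) on a few illustrative positional features; the feature set and the toy training pairs are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: decide whether a segmented region (pedicel / main stem)
# belongs to a detected bunch box, from simple positional features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def positional_features(bunch_box, region_box):
    """Centre offsets and IoU between two (x1, y1, x2, y2) boxes."""
    bx, by = (bunch_box[0] + bunch_box[2]) / 2.0, (bunch_box[1] + bunch_box[3]) / 2.0
    rx, ry = (region_box[0] + region_box[2]) / 2.0, (region_box[1] + region_box[3]) / 2.0
    ix1, iy1 = max(bunch_box[0], region_box[0]), max(bunch_box[1], region_box[1])
    ix2, iy2 = min(bunch_box[2], region_box[2]), min(bunch_box[3], region_box[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_b = (bunch_box[2] - bunch_box[0]) * (bunch_box[3] - bunch_box[1])
    area_r = (region_box[2] - region_box[0]) * (region_box[3] - region_box[1])
    iou = inter / (area_b + area_r - inter + 1e-6)
    return [rx - bx, ry - by, iou]

# X: positional features for labelled (bunch, region) pairs; y: 1 = same bunch.
X = np.array([positional_features((10, 10, 60, 80), (30, 0, 40, 12)),
              positional_features((10, 10, 60, 80), (200, 5, 210, 90))])
y = np.array([1, 0])

cart = DecisionTreeClassifier(criterion="gini", max_depth=4).fit(X, y)
print(cart.predict([positional_features((10, 10, 60, 80), (28, 2, 38, 14))]))
```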