Grasping
Artificial intelligence
Computer science
Computer vision
Convolutional neural network
Rectangle
Detector
Object detection
Feature (linguistics)
Feature extraction
Task (project management)
Robot
Pattern recognition (psychology)
Engineering
Mathematics
Telecommunications
Philosophy
Linguistics
Programming language
Systems engineering
Geometry
Authors
Yongxiang Wu, Fuhai Zhang, Yili Fu
Source
Journal: IEEE Transactions on Industrial Electronics
[Institute of Electrical and Electronics Engineers]
Date: 2021-12-21
Volume/Issue: 69 (12): 13171-13181
Citations: 22
Identifier
DOI: 10.1109/tie.2021.3135629
Abstract
Robotic grasping is essential for intelligent manufacturing. This article presents a novel anchor-free grasp detector based on a fully convolutional network for detecting multiple valid grasps from RGB-D images in real time. Grasp detection is formulated as a closest horizontal-or-vertical rectangle regression task and a grasp angle classification task. By directly predicting grasps at feature points, our method eliminates the predefined anchors commonly used in prior methods, thereby avoiding anchor-related hyperparameters and complex computations. To suppress ambiguous and low-quality training samples, a new sample assignment strategy that combines center sampling and regression weights is proposed. Our method achieves a state-of-the-art accuracy of 99.4% on the Cornell dataset and 96.2% on the Jacquard dataset, and a real-time speed of 104 frames per second, with approximately 2× fewer parameters and 8× less training time than the previous one-stage detector. Moreover, an efficient multiscale feature fusion module is integrated to improve multigrasp detection performance by 25%. In real-world robotic grasping of novel objects, our method achieves a grasp success rate of 91.3% for a single object and 83.3% for multiple objects, with only 26 ms required for the entire planning process. The results demonstrate that our method is robust for potential industrial applications.
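The abstract describes predicting grasps directly at feature points: each point regresses distances to the sides of the closest axis-aligned rectangle, and a separate head classifies the grasp angle into discrete bins. The paper does not include code, so the following is only a minimal NumPy sketch of such an anchor-free decoding step under assumed conventions (`NUM_ANGLE_BINS`, the `(l, t, r, b)` distance layout, and the stride are illustrative choices, not the authors' exact design):

```python
import numpy as np

NUM_ANGLE_BINS = 18  # hypothetical: 180 degrees split into 10-degree bins

def decode_grasps(reg_map, angle_logits, score_map, stride=4, score_thresh=0.5):
    """Decode anchor-free grasp predictions into rotated rectangles.

    reg_map:      (H, W, 4) distances (l, t, r, b) from each feature point
                  to the four sides of the closest axis-aligned rectangle.
    angle_logits: (H, W, NUM_ANGLE_BINS) per-point angle-bin scores.
    score_map:    (H, W) graspness confidence per feature point.
    Returns a list of (cx, cy, w, h, angle_deg, score) tuples.
    """
    grasps = []
    ys, xs = np.where(score_map > score_thresh)
    for y, x in zip(ys, xs):
        l, t, r, b = reg_map[y, x]
        # Map the feature-grid location back to image coordinates.
        px, py = x * stride, y * stride
        cx = px + (r - l) / 2.0          # rectangle center
        cy = py + (b - t) / 2.0
        w, h = l + r, t + b              # rectangle extent
        bin_idx = int(np.argmax(angle_logits[y, x]))
        angle = bin_idx * (180.0 / NUM_ANGLE_BINS)  # bin index -> degrees
        grasps.append((cx, cy, w, h, angle, float(score_map[y, x])))
    return grasps
```

Because every feature point predicts its own rectangle, no anchor boxes (and hence no anchor sizes, ratios, or IoU-matching hyperparameters) are involved, which is the efficiency argument the abstract makes.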