Artificial intelligence
Computer science
Computer vision
Pose
Segmentation
Object (grammar)
Partition (number theory)
Task (project management)
Exploit
Engineering
Mathematics
Computer security
Combinatorics
Systems engineering
Authors
Xiaohan Li, Xiaozhen Zhang, Xiang Zhou, I-Ming Chen
Identifier
DOI:10.1016/j.knosys.2023.110491
Abstract
Robotic grasping faces the challenge of accurately extracting a graspable target from a complicated scene. To address this issue, we propose a 3D vision prediction framework comprising visual observation and pose estimation. First, we exploit the continuity characteristics of the U-disparity map to identify isolated and occluded objects, which quickly partitions the grasping scene and produces valid candidate regions for grasping. Second, an improved end-to-end approach based on PointNet++ is used to locate the topmost target when objects are stacked in a pile. We also provide a robust labeling method for generating datasets of multi-object scenes, and we present a designed evaluation criterion to assist in estimating the 6-DOF (degree-of-freedom) pose. Our method, UPG (U-disparity and PointNet++ grasping), simplifies the segmentation task and keeps the trained model lightweight so that it can be applied to practical bin-picking and assembly. To validate its feasibility, UPG is evaluated in both simulated and real-world scenes. Extensive results indicate that UPG achieves better segmentation accuracy and grasping success rates than other state-of-the-art methods.
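As a rough illustration of the U-disparity idea underlying the scene-partitioning step (a sketch, not the authors' implementation; the function and parameter names below are hypothetical): a U-disparity map is a column-wise histogram of a dense disparity image, so a surface lying at roughly constant depth appears as a continuous horizontal run of high counts, and breaks in such runs are the kind of continuity cue the abstract describes for separating isolated from occluded objects.

import numpy as np

def u_disparity_map(disparity, max_disp=128):
    # Build a U-disparity map: entry (d, u) counts how many pixels in
    # image column u have integer disparity d. Pixels with disparity
    # <= 0 or >= max_disp are treated as invalid and ignored.
    # Illustrative sketch only, assuming an integer-valued H x W input.
    h, w = disparity.shape
    umap = np.zeros((max_disp, w), dtype=np.int32)
    for u in range(w):
        col = disparity[:, u].astype(np.int64)
        valid = col[(col > 0) & (col < max_disp)]
        if valid.size:
            umap[:, u] = np.bincount(valid, minlength=max_disp)
    return umap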