Keywords
Artificial intelligence
Computer vision
Computer science
Grasping
Pose
Point cloud
Segmentation
Robotics
Object (grammar)
RGB color model
Feature (linguistics)
Feature extraction
Linguistics
Philosophy
Programming language
Authors
Jie Luo,Xiaofeng Zhong,Chaoquan Shi,Guizhi Yang,Jing Zhao,Min Xu
Identifier
DOI: 10.23919/ccc58697.2023.10240934
Abstract
Existing robot grasping methods usually consist of end-to-end network pipelines. These networks not only train and predict multiple grasping attributes at once but may also perform instance segmentation of the objects in the scene simultaneously, which limits the complexity of the scenes they can handle and yields unsatisfactory performance in severely occluded or stacked scenes. This paper performs instance segmentation of unknown objects in stacked scenes on RGB images, reduces the stacked scene to a single-object grasping problem by combining the instance masks with the depth map, and introduces a residual module with a bottleneck layer into the grasping network, which reduces the amount of input data and improves feature extraction; the final 7-DoF grasp pose is estimated in the point cloud. As a result, the prediction dimension required by the grasping network is reduced and the accuracy of grasp pose discrimination is improved. Extensive real-robot experiments show that the approach performs well in stacked environments of unknown objects.
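The core reduction described in the abstract, combining an instance mask with the depth map to isolate a single object before pose estimation, can be sketched as follows. This is a minimal illustration in NumPy, not the authors' implementation; the function name `mask_to_point_cloud` and the pinhole intrinsics `fx, fy, cx, cy` are assumptions for the example.

```python
import numpy as np

def mask_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project the masked depth pixels of one object to an (N, 3) point cloud.

    depth : (H, W) depth map in meters; mask : (H, W) boolean instance mask.
    fx, fy, cx, cy : pinhole camera intrinsics (illustrative values below).
    """
    v, u = np.nonzero(mask & (depth > 0))  # pixel rows/cols inside the object
    z = depth[v, u]
    x = (u - cx) * z / fx                  # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy                  # pinhole model: Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=1)

# Toy usage: a 4x4 depth map where the instance mask covers two valid pixels,
# so the "stacked scene" collapses to one object's point cloud.
depth = np.zeros((4, 4))
depth[1, 1] = depth[1, 2] = 0.5
mask = depth > 0
pts = mask_to_point_cloud(depth, mask, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(pts.shape)  # (2, 3)
```

In the paper's pipeline such a per-object point cloud would then be fed to the grasping network for 7-DoF pose estimation, so the network never sees the clutter of the full stacked scene.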