Clutter
Computer science
Artificial intelligence
Robot
Computer vision
Object (grammar)
Object detection
Pose
RGB color model
Pattern recognition (psychology)
Radar
Telecommunications
Authors
Tong Li, Jing An, Kai Yang, Gang Chen, Yifan Wang
Identifier
DOI:10.1109/iciea54703.2022.10005947
Abstract
Considering the diversity and stacking of objects in clutter, an efficient network is constructed for grasping pose generation by limiting the recognition range of grasping pose estimation and simplifying the grasping network structure. Specifically, the RGB images and the robot grasping task are sent simultaneously to the RetinaNet grasping target detection module, which identifies the type of object to be grasped and locates the grasping keypoint; the localized data containing only the grasping object are then sent to the grasping angle estimation module to predict the grasping angle, finally yielding the robot grasping pose. Further, the VMRD dataset is improved for target-oriented grasping, and experiments are carried out to verify the precision and efficiency of the network. With the proposed network, the speed of grasping pose generation reaches 11.4 FPS, and the precision of pose estimation is improved by 11.9% over the baseline model.
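A minimal sketch (PyTorch) of the two-stage pipeline the abstract describes: a detector first localizes the task-relevant object, and a lightweight head then predicts the grasping angle only within that crop, composing a planar grasp pose. The module names, crop size, angle discretization, and the use of the box center as the grasp keypoint are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF


class GraspAngleHead(nn.Module):
    """Small CNN that classifies a cropped object patch into one of a fixed
    number of discretized grasp-angle bins (an assumption for illustration)."""

    def __init__(self, num_angle_bins: int = 18):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_angle_bins)

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(patch).flatten(1))


def grasp_pose_from_detection(image: torch.Tensor,
                              box: torch.Tensor,
                              angle_head: GraspAngleHead,
                              num_angle_bins: int = 18):
    """Compose a planar grasp pose (x, y, theta) from one detected box.

    image: (3, H, W) RGB tensor in [0, 1]; box: (4,) tensor [x1, y1, x2, y2]
    produced by any object detector (e.g. a RetinaNet-style detector).
    """
    x1, y1, x2, y2 = box.tolist()
    # Grasp keypoint: here simply the box center (an assumption).
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    # Crop the detected object and resize it, so only the localized region
    # is passed to the grasping angle estimation module.
    patch = TF.resized_crop(image, int(y1), int(x1),
                            max(int(y2 - y1), 1), max(int(x2 - x1), 1),
                            size=[64, 64]).unsqueeze(0)
    with torch.no_grad():
        bin_idx = angle_head(patch).argmax(dim=1).item()
    theta = bin_idx * (180.0 / num_angle_bins)  # degrees, in [0, 180)
    return cx, cy, theta


if __name__ == "__main__":
    img = torch.rand(3, 480, 640)                          # dummy RGB image
    det_box = torch.tensor([200.0, 150.0, 320.0, 260.0])   # dummy detection
    pose = grasp_pose_from_detection(img, det_box, GraspAngleHead())
    print("grasp pose (x, y, theta_deg):", pose)
```

Restricting the angle estimator to the detected crop mirrors the abstract's idea of limiting the recognition range, which is what keeps the second stage small and fast.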