Authors
Hao-Shu Fang,Minghao Gou,Chenxi Wang,Cewu Lu
Identifier
DOI:10.1177/02783649231193710
Abstract
Robust object grasping in cluttered scenes is vital to all robotic prehensile manipulation. In this paper, we present the GraspNet-1Billion benchmark, which contains rich real-world captured cluttered scenarios and abundant annotations. This benchmark aims at solving two critical problems for cluttered-scene parallel-finger grasping: insufficient real-world training data and the lack of an evaluation benchmark. We first contribute a large-scale grasp pose detection dataset. Two different depth cameras, based on structured-light and time-of-flight technologies, are adopted. Our dataset contains 97,280 RGB-D images with over one billion grasp poses. In total, 190 cluttered scenes are collected, among which 100 form the training set and 90 the test set. Meanwhile, we build an evaluation system that is general and user-friendly. It directly reports a predicted grasp pose's quality by analytic computation, which makes it able to evaluate any kind of grasp representation without exhaustively labeling ground truth. We further divide the test set into three difficulty levels to better evaluate algorithms' generalization ability. Our dataset, access API, and evaluation code are publicly available at www.graspnet.net.
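The abstract's key design point is that predicted grasps are scored analytically against the scene geometry rather than matched to exhaustively labeled ground truth. As a rough, self-contained illustration of what an analytic check can look like, the sketch below implements a simple antipodal test under a Coulomb friction cone for a parallel-jaw contact pair. This is a hypothetical toy example for intuition only, not the paper's actual force-closure-based metric; the function name, inputs, and friction coefficient are all assumptions.

```python
import numpy as np

def antipodal_quality(p1, p2, n1, n2, mu=0.4):
    """Toy antipodal check for a parallel-jaw grasp.

    p1, p2: 3D contact points on the object surface.
    n1, n2: outward unit surface normals at those contacts.
    mu:     Coulomb friction coefficient (assumed value).

    Returns True if the line connecting the two contacts lies inside
    both friction cones, i.e. the fingers can squeeze without slipping.
    Hypothetical illustration; not the GraspNet-1Billion evaluator.
    """
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)          # grasp closing direction
    half_angle = np.arctan(mu)                  # friction-cone half angle
    # Angle between the closing direction and each inward normal (-n).
    a1 = np.arccos(np.clip(np.dot(axis, -n1), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(-axis, -n2), -1.0, 1.0))
    return bool(a1 <= half_angle and a2 <= half_angle)
```

For example, two contacts on directly opposing faces of a box pass the test, while a contact whose normal is tilted 45° away from the closing axis fails for mu = 0.4 (cone half-angle of about 21.8°). The released toolkit at www.graspnet.net implements the benchmark's actual analytic scoring.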