Keywords
Grasping
Artificial intelligence
Computer vision
Computer science
Robotics
Kinematics
Representation (politics)
Physics
Classical mechanics
Politics
Political science
Law
Programming language
Authors
Aniket Ghodake, Prakash Uttam, B. B. Ahuja
Identifiers
DOI: 10.1109/i4tech55392.2022.9952955
Abstract
With autonomous grasping capabilities, robot manipulators will be able to grasp objects in a cluttered environment without human intervention. To achieve this, we need to localize a robotic grasp configuration for each object in the scene, which is then passed to an inverse kinematics plugin to plan and execute the grasp. In this paper, we propose an end-to-end approach that predicts a 6-DOF grasp configuration from raw depth images captured by a 3D camera such as the Microsoft Kinect sensor. We design a network that accepts a cropped Truncated Signed Distance Function (TSDF) representation of the scene around a point and its corresponding surface normal, and for that point predicts the yaw angle of a grasp configuration, its probability of success, and the offset of the grasp along the normal. In this way, we convert the 6-DOF grasp detection problem into a 2-DOF grasp representation problem, reducing the dimensionality of the grasp representation and thereby easing the learning process. Our approach detects grasps at a point in only 60 milliseconds. In a simulated robotic grasping experiment in a cluttered environment, our model achieves a success rate of over 89%, clearing almost all objects from the scene. For this project, we used the Python programming language for all of the code, the Robot Operating System (ROS) as middleware between the various nodes, and the PyBullet physics engine for data generation and for the simulated experiments.
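The abstract's central idea, recovering a full 6-DOF grasp pose from a sampled point, its surface normal, and only two predicted quantities (yaw about the normal and an offset along it), can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the frame convention (gripper approach axis opposing the normal), and the Rodrigues-based yaw rotation are all assumptions made for the sketch.

```python
import numpy as np

def grasp_pose_from_prediction(point, normal, yaw, offset):
    """Lift a 2-DOF prediction back to a 4x4 homogeneous 6-DOF grasp pose.

    point  : (3,) sampled surface point (hypothetical input)
    normal : (3,) surface normal at that point
    yaw    : predicted rotation (rad) about the normal
    offset : predicted stand-off distance along the normal
    """
    n = normal / np.linalg.norm(normal)
    # Assumed convention: the gripper approaches against the normal.
    approach = -n
    # Build an arbitrary tangent vector orthogonal to the normal.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, n)) > 0.9:  # avoid a near-parallel helper axis
        helper = np.array([0.0, 1.0, 0.0])
    tangent = np.cross(n, helper)
    tangent /= np.linalg.norm(tangent)
    # Rotate the tangent about the normal by the predicted yaw
    # (Rodrigues' formula; the axial term vanishes since tangent ⟂ n).
    c, s = np.cos(yaw), np.sin(yaw)
    binormal = tangent * c + np.cross(n, tangent) * s
    closing = np.cross(approach, binormal)  # gripper closing direction
    T = np.eye(4)
    T[:3, 0] = binormal
    T[:3, 1] = closing
    T[:3, 2] = approach
    T[:3, 3] = point + offset * n  # grasp center, stand-off along the normal
    return T
```

Under these assumptions, the network only has to learn the yaw, the offset, and a success probability per point; the remaining degrees of freedom come for free from the sampled point and its normal, which is what makes the 2-DOF representation cheaper to learn than a full 6-DOF regression.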