Topics
Net (polyhedron) · Computer Science · Artificial Intelligence · Computer Vision · Mathematics · Geometry
Authors
Hui Zhang,Jianzhi Lyu,Chuangchuang Zhou,Hongzhuo Liang,Yuyang Tu,Fuchun Sun,Jianwei Zhang
Source
Journal: IEEE Transactions on Cybernetics
[Institute of Electrical and Electronics Engineers]
Date: 2025-01-01
Volume/Issue: 1-14
Identifiers
DOI: 10.1109/tcyb.2024.3518975
Abstract
In this article, a novel simulation-to-real (sim2real) multimodal learning framework is proposed for adaptive dexterous grasping and grasp status prediction. A two-stage approach is built upon the Isaac Gym and several proposed pluggable modules, which can effectively simulate dexterous grasps with multimodal sensing data, including RGB-D images of grasping scenarios, joint angles, 3-D tactile forces of soft fingertips, etc. Over 500K multimodal synthetic grasping scenarios are collected for neural network training. An adaptive dexterous grasping neural network (ADG-Net) is trained to learn dexterous grasp principles and predict grasp parameters, employing an attention mechanism and a graph convolutional neural network module to fuse multimodal information. The proposed adaptive dexterous grasping method can detect feasible grasp parameters from an RGB-D image of a grasp scene and then optimize grasp parameters based on multimodal sensing data when the dexterous hand touches a target object. Various experiments in both simulation and physical grasping indicate that our ADG-Net grasping method outperforms state-of-the-art grasping methods, achieving an average success rate of 92% for grasping isolated unseen objects and 83% for stacked objects. Code and video demos are available at https://github.com/huikul/adgnet.
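The abstract describes fusing RGB-D image features, hand joint angles, and per-fingertip 3-D tactile forces through an attention mechanism and a graph convolutional module. The sketch below is a minimal, hypothetical PyTorch illustration of that style of multimodal fusion head; it is not the authors' ADG-Net. All layer sizes, the fingertip graph, the grasp-parameter dimensionality, and the names SimpleGraphConv and MultimodalGraspHead are assumptions made purely for illustration (the actual architecture is available at the repository linked above).

```python
# Hypothetical sketch of a multimodal grasp head in the spirit described by the
# abstract: a simple graph convolution over fingertip tactile nodes plus
# self-attention fusion of per-modality embeddings. Not the authors' ADG-Net.
import torch
import torch.nn as nn


class SimpleGraphConv(nn.Module):
    """One graph-convolution step: H' = ReLU(D^-1 A H W) with self-loops assumed in A."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # adj: (N, N) fingertip adjacency; normalize rows by node degree.
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((adj / deg) @ h))


class MultimodalGraspHead(nn.Module):
    # Dimensions below (512-d image feature, 22 joints, 3-D tactile force,
    # 7-d grasp parameterization) are assumptions for the sketch.
    def __init__(self, img_dim=512, joint_dim=22, tactile_dim=3, emb=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb)               # RGB-D scene feature
        self.joint_proj = nn.Linear(joint_dim, emb)           # hand joint angles
        self.tactile_gcn = SimpleGraphConv(tactile_dim, emb)  # per-fingertip forces
        self.attn = nn.MultiheadAttention(emb, num_heads=4, batch_first=True)
        self.grasp_out = nn.Linear(emb, 7)    # grasp parameters (assumed size)
        self.status_out = nn.Linear(emb, 1)   # grasp-success logit

    def forward(self, img_feat, joints, tactile, adj):
        # One token per modality; tactile nodes are pooled after the GCN.
        tac = self.tactile_gcn(tactile, adj).mean(dim=1)               # (B, emb)
        tokens = torch.stack(
            [self.img_proj(img_feat), self.joint_proj(joints), tac], dim=1
        )                                                              # (B, 3, emb)
        fused, _ = self.attn(tokens, tokens, tokens)                   # attention fusion
        pooled = fused.mean(dim=1)
        return self.grasp_out(pooled), self.status_out(pooled)


if __name__ == "__main__":
    batch, fingers = 2, 5
    adj = torch.ones(fingers, fingers)  # fully connected fingertip graph (assumption)
    model = MultimodalGraspHead()
    grasp, status = model(torch.randn(batch, 512), torch.randn(batch, 22),
                          torch.randn(batch, fingers, 3), adj)
    print(grasp.shape, status.shape)  # torch.Size([2, 7]) torch.Size([2, 1])
```

In such a setup the grasp-parameter output could be used for the initial detection from the RGB-D image, while the status logit could support refining the grasp once tactile contact is made, matching the two-stage idea outlined in the abstract; the exact training losses and refinement procedure are not specified here.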