Grasp
Computer science
Encoder
Feature (linguistics)
Artificial intelligence
Object (grammar)
Scale (ratio)
Robot
Orientation (vector space)
Computer vision
Pattern recognition (psychology)
Mathematics
Programming language
Philosophy
Linguistics
Physics
Geometry
Quantum mechanics
Operating system
Authors
Xungao Zhong, Xianghui Liu, Tao Gong, Yuan Sun, Huosheng Hu, Qiang Liu
Source
Journal: Applied Sciences
[Multidisciplinary Digital Publishing Institute]
Date: 2024-06-12
Volume/Issue: 14 (12): 5097-5097
Citations: 2
Abstract
Grasping robots always confront challenges such as uncertainties in object size, orientation, and type, necessitating effective feature augmentation to improve grasping detection performance. However, many prior studies inadequately emphasize grasp-related features, resulting in suboptimal grasping performance. To address this limitation, this paper proposes a new grasping approach termed the Feature-Augmented Grasp Detection Network (FAGD-Net). The proposed network incorporates two modules designed to enhance spatial information features and multi-scale features. Firstly, we introduce the Residual Efficient Multi-Scale Attention (Res-EMA) module, which effectively adjusts the importance of feature channels while preserving precise spatial information within those channels. Additionally, we present a Feature Fusion Pyramidal Module (FFPM) that serves as an intermediary between the encoder and decoder, effectively addressing potential oversights or losses of grasp-related features as the encoder network deepens. As a result, FAGD-Net achieved advanced levels of grasping accuracy, with 98.9% and 96.5% on the Cornell and Jacquard datasets, respectively. The grasp detection model was deployed on a physical robot for real-world grasping experiments, where we conducted a series of trials in diverse scenarios. In these experiments, we randomly selected various unknown household items and adversarial objects. Remarkably, we achieved high success rates, with a 95.0% success rate for single-object household items, 93.3% for multi-object scenarios, and 91.0% for cluttered scenes.
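The abstract describes two ideas: a residual attention module (Res-EMA) that reweights feature channels while keeping their spatial layout intact, and a multi-scale fusion module (FFPM) that bridges encoder and decoder. The paper does not give code here, so the following is only a minimal NumPy sketch of those two generic mechanisms (residual channel attention and pyramid-style multi-scale fusion); the function names, shapes, and pooling scales are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention_residual(x):
    # Illustrative sketch (not the paper's Res-EMA): reweight channels by a
    # gated global statistic, then add a residual so spatial detail survives.
    # x: feature map of shape (C, H, W).
    pooled = x.mean(axis=(1, 2))                  # global average pool -> (C,)
    weights = 1.0 / (1.0 + np.exp(-pooled))       # sigmoid gate per channel
    attended = x * weights[:, None, None]         # rescale channels in place
    return x + attended                           # residual connection

def multi_scale_fusion(x, scales=(1, 2, 4)):
    # Illustrative sketch (not the paper's FFPM): average-pool the map at
    # several strides, upsample each back, and average the results.
    # Assumes H and W are divisible by every stride in `scales`.
    c, h, w = x.shape
    fused = np.zeros_like(x)
    for s in scales:
        pooled = x.reshape(c, h // s, s, w // s, s).mean(axis=(2, 4))
        up = pooled.repeat(s, axis=1).repeat(s, axis=2)  # nearest upsample
        fused += up
    return fused / len(scales)
```

Both functions preserve the input's (C, H, W) shape, which is what lets such modules drop between an encoder and decoder without changing the surrounding architecture.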