Deep Fusion for Multi-Modal 6D Pose Estimation

Authors
Shifeng Lin, Zunran Wang, Shenghao Zhang, Yonggen Ling, Chenguang Yang
Source
Journal: IEEE Transactions on Automation Science and Engineering [Institute of Electrical and Electronics Engineers]
Volume/Issue: 21 (4): 6540-6549  Cited by: 7
Identifier
DOI: 10.1109/tase.2023.3327772
Abstract

6D pose estimation with an individual modality encounters difficulties due to the limitations of each modality, such as RGB information on textureless objects and depth on reflective objects. This can be improved by exploiting the complementarity between modalities. Most previous methods only consider the correspondence between point clouds and RGB images and directly extract the features of the two corresponding modalities for fusion, which ignores the information of the modality itself and is negatively affected by erroneous background information when more features are introduced for fusion. To enhance the complementarity between multiple modalities, we propose a neighbor-based cross-modality attention mechanism for multi-modal 6D pose estimation. "Neighbor-based" means that the RGB features of multiple neighboring points are used for fusion, which expands the receptive field. The cross-modality attention mechanism leverages the similarities between the different modal features to guide modal feature fusion, which reduces the negative impact of incorrect background information. Moreover, we design features computed between the rendered image and the original image to obtain a confidence for the pose estimation result. Experimental results on the LM, LM-O and YCB-V datasets demonstrate the effectiveness of our method. A video is available at https://www.youtube.com/watch?v=ApNBcX6NEGs.

Note to Practitioners — Introducing information from surrounding points during multi-modal fusion improves the performance of 6D pose estimation. For example, the RGB image region corresponding to some point clouds on the object may lack rich texture features even though its neighboring regions have them. However, most RGBD modal-fusion methods for 6D pose estimation only consider the direct correspondence between RGB images and point clouds for feature fusion, which may introduce redundant information or incorrect background information when neighbor information is added.
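The neighbor-based cross-modal fusion described above can be sketched as attention over each point's K pixel neighbors, with weights derived from the similarity between the point's own feature and each neighbor's RGB feature, so mismatched background pixels receive low weight. This is a minimal NumPy sketch under assumed shapes, not the paper's implementation; the function and variable names are hypothetical, and the actual network layers, feature dimensions, and similarity function may differ.

```python
import numpy as np

def neighbor_cross_attention(point_feats, rgb_feats, neighbor_idx):
    """Sketch of neighbor-based cross-modal attention (illustrative only).

    point_feats:  (N, C) per-point geometric features
    rgb_feats:    (N, C) per-point RGB features, sampled at the pixel
                  each point projects to
    neighbor_idx: (N, K) indices of each point's K neighbors

    Returns (N, C) fused RGB features: a softmax-weighted average of the
    K neighbors, weighted by similarity to the point's own feature.
    """
    neighbors = rgb_feats[neighbor_idx]               # (N, K, C)
    # Attention logits: dot-product similarity between the point's own
    # modality feature and each neighbor's RGB feature.
    logits = np.einsum('nc,nkc->nk', point_feats, neighbors)
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over K
    return np.einsum('nk,nkc->nc', weights, neighbors)
```

Because the weights are a softmax, the fused feature is a convex combination of the neighbor features, which is what lets low-similarity (e.g. background) neighbors be suppressed rather than averaged in uniformly.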
In this paper, we propose a cross-modal attention mechanism based on neighbor information. By using the information of a modality itself to weight the neighbor information of the other modality in the encoding and decoding stages, the receptive field is expanded and the complementarity between different modalities is enhanced. Experiments demonstrate the effectiveness of this design. In addition, we provide a pose confidence estimator for the predicted pose results. Specifically, the image rendered with the predicted pose and the real image are used to extract features for a decision tree. The experimental results show that wrong estimates can be filtered out with high accuracy and recall. The 6D pose confidence can provide a reference for real-world grasping. However, the current method can only estimate objects with known models. In the future, we will consider extending the method to unseen objects.
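The confidence estimator can be sketched as: render the object at the predicted pose, compute image-comparison features between the rendered and real images, and feed them to a decision tree. The three features below (masked mean absolute difference, mean squared difference, photometric correlation) and the synthetic training data are illustrative assumptions of this sketch, not the features used in the paper; `scikit-learn` stands in for whatever tree implementation the authors used.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def comparison_features(rendered, real, mask):
    """Features comparing the image rendered at the predicted pose with
    the real image, restricted to the rendered object mask.
    The three features here are illustrative, not the paper's."""
    r = rendered[mask].astype(np.float64)
    o = real[mask].astype(np.float64)
    corr = np.corrcoef(r, o)[0, 1] if r.std() > 0 and o.std() > 0 else 0.0
    return np.array([
        np.abs(r - o).mean(),   # mean absolute difference
        ((r - o) ** 2).mean(),  # mean squared difference
        corr,                   # photometric correlation
    ])

# Train the decision tree on synthetic feature vectors: correct poses
# yield low-error / high-correlation features, wrong poses the opposite.
rng = np.random.default_rng(0)
X_good = rng.normal([0.05, 0.01, 0.9], 0.02, size=(50, 3))
X_bad = rng.normal([0.40, 0.20, 0.2], 0.05, size=(50, 3))
X = np.vstack([X_good, X_bad])
y = np.array([1] * 50 + [0] * 50)   # 1 = pose accepted, 0 = rejected
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
```

In a real pipeline, `X` would instead be built by running `comparison_features` on renders of poses with known correct/incorrect labels; the fitted tree then flags predicted poses whose rendered appearance disagrees with the observed image.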
