Artificial intelligence
Pattern
RGB color model
Computer science
Pose
Modality (human-computer interaction)
Computer vision
Point cloud
Fusion mechanism
Modal verb
Feature (linguistics)
Pattern recognition (psychology)
Fusion
Social science
Linguistics
Philosophy
Chemistry
Lipid bilayer fusion
Sociology
Polymer chemistry
Authors
Shifeng Lin, Zunran Wang, Shenghao Zhang, Yonggen Ling, Chenguang Yang
Source
Journal: IEEE Transactions on Automation Science and Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: pp. 1-10
Citations: 2
Identifiers
DOI: 10.1109/tase.2023.3327772
Abstract
6D pose estimation with an individual modality encounters difficulties due to the limitations of each modality, such as RGB information on textureless objects and depth information on reflective objects. This can be improved by exploiting the complementarity between modalities. Most previous methods only consider the correspondence between point clouds and RGB images and directly extract the features of the two corresponding modalities for fusion; they ignore the information of the modality itself and are negatively affected by erroneous background information when introducing more features for fusion. To enhance the complementarity between multiple modalities, we propose a neighbor-based cross-modal attention mechanism for multi-modal 6D pose estimation. "Neighbor-based" means that the RGB features of multiple neighbors are applied for fusion, which expands the receptive field. The cross-modal attention mechanism leverages the similarities between features of different modalities to guide modal feature fusion, which reduces the negative impact of incorrect background information. Moreover, we design features that compare the image rendered with the predicted pose against the original image to obtain a confidence for the pose estimation result. Experimental results on the LM, LM-O and YCB-V datasets demonstrate the effectiveness of our method. A video is available at https://www.youtube.com/watch?v=ApNBcX6NEGs.

Note to Practitioners: Introducing the information of surrounding points during multi-modal fusion improves the performance of 6D pose estimation. For example, the image region corresponding to a point on the object may lack rich texture features while its neighbors do not. However, most RGB-D fusion methods for 6D pose estimation only consider the one-to-one correspondence between RGB images and point clouds for feature fusion, so naively introducing neighbor information may bring in redundant or erroneous background information. In this paper, we propose a cross-modal attention mechanism based on neighbor information. By using the information of a modality itself to weight the neighbor information of the other modality in the encoding and decoding stages, the receptive field is expanded and the complementarity between different modalities is enhanced. Experiments demonstrate the effectiveness of this design. In addition, we provide a pose confidence estimator for the predicted pose. Specifically, features comparing the image rendered at the predicted pose with the real image are extracted and fed to a decision tree. Experimental results show that wrong estimates can be rejected with high accuracy and recall. The 6D pose confidence can serve as a reference for real-world grasping. However, the current method can only estimate the pose of objects with known models; in future work, we will consider extending it to unseen objects.
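The fusion step described in the abstract, attention over the RGB features of a point's image neighbors weighted by cross-modal similarity, can be sketched in a few lines. Below is a minimal PyTorch sketch, not the authors' released code: the module name, tensor shapes, and the residual connection are illustrative assumptions, and the K neighbor RGB features are presumed to be pre-gathered around each point's image projection.

import torch
import torch.nn as nn

class NeighborCrossModalAttention(nn.Module):
    # Sketch: fuse per-point geometric features with the RGB features of
    # K image neighbors. Shapes: point_feat (B, N, C), rgb_neighbors (B, N, K, C).
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # query from the point's own modality
        self.k = nn.Linear(dim, dim)  # keys from the neighbors' RGB features
        self.v = nn.Linear(dim, dim)  # values from the neighbors' RGB features
        self.scale = dim ** -0.5

    def forward(self, point_feat, rgb_neighbors):
        q = self.q(point_feat).unsqueeze(2)      # (B, N, 1, C)
        k = self.k(rgb_neighbors)                # (B, N, K, C)
        v = self.v(rgb_neighbors)                # (B, N, K, C)
        # Similarity between each point feature and its neighbors' RGB features
        attn = ((q * k).sum(-1) * self.scale).softmax(dim=-1)   # (B, N, K)
        fused = (attn.unsqueeze(-1) * v).sum(2)  # (B, N, C)
        # Residual keeps the modality's own information in the fused feature
        return point_feat + fused

Because the weights come from cross-modal similarity, neighbors whose RGB features disagree with the point's geometric feature (for example, background pixels that leak into the neighborhood) receive small weights, which is how the mechanism limits the influence of erroneous background information while still enlarging the receptive field.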
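The pose confidence estimator compares the image rendered at the predicted pose with the real image and feeds comparison features to a decision tree. A minimal scikit-learn sketch follows; the specific features (masked photometric statistics) and the tree depth are stand-ins chosen for illustration, since the abstract does not enumerate the actual features.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def pose_confidence_features(rendered, observed, mask):
    # Photometric comparison inside the rendered object mask; illustrative only.
    diff = np.abs(rendered.astype(np.float32) - observed.astype(np.float32))
    on_object = diff[mask > 0]
    return np.array([
        on_object.mean(),          # average photometric error on the object
        on_object.std(),           # spread of the error
        (on_object > 30).mean(),   # fraction of strongly mismatched pixels
    ])

def fit_confidence_tree(renders, images, masks, labels):
    # labels: 1 = pose accepted as correct, 0 = pose rejected (placeholder data)
    X = np.stack([pose_confidence_features(r, i, m)
                  for r, i, m in zip(renders, images, masks)])
    return DecisionTreeClassifier(max_depth=4).fit(X, labels)

A shallow tree keeps the accept/reject rule interpretable, which matters when the confidence is used as a go/no-go signal before real-world grasping.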