Point cloud
Computer science
Artificial intelligence
Discriminative model
Computer vision
Feature (computer vision)
Image registration
Feature matching
Point (geometry)
Pattern recognition
Matching (statistics)
Feature extraction
Mathematics
Image (mathematics)
Statistics
Philosophy
Linguistics
Geometry
Authors
Fengguang Xiong, Yu Kong, Shuaikang Xie, Liqun Kuang, Xie Han
Identifier
DOI:10.1038/s41598-024-56217-9
Abstract
Deformable attention focuses only on a small set of key sampling points around a reference point, which lets it capture local features of the input feature map dynamically, regardless of the feature map's size. Introducing it into point cloud registration makes extracting local geometric features from a point cloud quicker and easier than with standard attention. We therefore propose a point cloud registration method based on a Spatial Deformable Transformer (SDT). SDT consists of a deformable self-attention module, which enhances the representation of local geometric features, and a cross-attention module, which enhances the discriminative capability of spatial-correspondence features. Experimental results show that, compared with state-of-the-art registration methods, SDT achieves better feature matching recall, inlier ratio, and registration recall on the 3DMatch and 3DLoMatch scenes, and better generalization ability and time efficiency on the ModelNet40 and ModelLoNet40 scenes.
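The abstract only sketches how deformable self-attention works on point features. Below is a minimal, illustrative PyTorch sketch of that idea, written under my own assumptions; the class name `PointDeformableSelfAttention`, the parameters `n_samples` and `offset_scale`, and the nearest-neighbour feature gathering are hypothetical choices for illustration, not the authors' SDT implementation. Each query point predicts a few 3D offsets around itself, features are gathered at the input points closest to those offset locations, and a learned softmax over the samples aggregates them.

```python
# Minimal sketch of deformable self-attention over point features (NOT the
# authors' SDT code; all names and hyperparameters below are assumptions).
import torch
import torch.nn as nn


class PointDeformableSelfAttention(nn.Module):
    """For each query point: predict a few 3D offsets, gather features of the
    nearest input points at the offset locations, and aggregate them with
    learned attention weights (softmax over the sampled locations)."""

    def __init__(self, dim: int, n_samples: int = 4, offset_scale: float = 0.1):
        super().__init__()
        self.n_samples = n_samples
        self.offset_scale = offset_scale          # bounds the offset magnitude
        self.to_offsets = nn.Linear(dim, 3 * n_samples)  # per-query 3D offsets
        self.to_weights = nn.Linear(dim, n_samples)      # per-sample attention logits
        self.value_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) point coordinates; feats: (B, N, C) point features
        B, N, _ = xyz.shape
        offsets = torch.tanh(self.to_offsets(feats)) * self.offset_scale
        offsets = offsets.view(B, N, self.n_samples, 3)
        sample_xyz = xyz.unsqueeze(2) + offsets          # (B, N, K, 3)

        # Nearest-neighbour "interpolation": snap each sampled location to the
        # closest input point and take that point's projected feature.
        dists = torch.cdist(sample_xyz.view(B, N * self.n_samples, 3), xyz)
        nn_idx = dists.argmin(dim=-1)                    # (B, N*K)
        values = self.value_proj(feats)                  # (B, N, C)
        gathered = torch.gather(
            values, 1, nn_idx.unsqueeze(-1).expand(-1, -1, values.size(-1))
        ).view(B, N, self.n_samples, -1)                 # (B, N, K, C)

        # Attention over the K sampled locations, then weighted aggregation.
        weights = torch.softmax(self.to_weights(feats), dim=-1)  # (B, N, K)
        out = (weights.unsqueeze(-1) * gathered).sum(dim=2)      # (B, N, C)
        return self.out_proj(out)


if __name__ == "__main__":
    pts = torch.rand(2, 1024, 3)
    fts = torch.rand(2, 1024, 64)
    attn = PointDeformableSelfAttention(dim=64)
    print(attn(pts, fts).shape)  # torch.Size([2, 1024, 64])
```

Because attention is computed only over the K sampled locations per query rather than over all N points, the cost scales with N·K instead of N², which is the efficiency argument the abstract makes for deformable attention.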