Keywords
point cloud, computer science, depth map, artificial intelligence, feature (linguistics), benchmark (surveying), convolution (computer science), RGB color model, pattern recognition (psychology), feature extraction, point (geometry), computer vision, pixel, artificial neural network, image (mathematics), mathematics, geography, philosophy, linguistics, geometry, geodesy
Authors
Yu Zhu, Zehua Sheng, Zili Zhou, Lun Luo, Si-Yuan Cao, Gu Hong, Huaqi Zhang, Hui-Liang Shen
Identifier
DOI: 10.1109/iccv51070.2023.00802
Abstract
Guided depth completion aims to recover dense depth maps by propagating depth information from the given pixels to the remaining ones under the guidance of RGB images. However, most existing methods achieve this through a large number of iterative refinements or by stacking repetitive blocks. Due to the limited receptive field of conventional convolution, generalizability across different sparsity levels of the input depth maps is impeded. To tackle these problems, we propose a feature point cloud aggregation framework to directly propagate 3D depth information between the given points and the missing ones. We extract 2D feature maps from images and transform the sparse depth map into a point cloud to extract sparse 3D features. By regarding the extracted features as two sets of feature point clouds, the depth information at a target location can be reconstructed by aggregating adjacent sparse 3D features from the known points using cross attention. Based on this, we design a neural network, called PointDC, to complete the entire depth information reconstruction process. Experimental results show that our PointDC achieves superior or competitive results on the KITTI benchmark and the NYUv2 dataset. In addition, the proposed PointDC demonstrates higher generalizability to different sparsity levels of the input depth maps and in cross-dataset evaluation.
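The abstract describes reconstructing depth at a missing pixel by cross-attending from its image feature to the 3D features of nearby known sparse points. The following is a minimal PyTorch sketch of that aggregation step, not the authors' implementation; the class name, feature dimension, number of neighbors k, and the k-nearest-neighbor selection are all illustrative assumptions.

# Minimal sketch (not the official PointDC code) of cross-attention
# aggregation over the k nearest known sparse 3D points for each query pixel.
import torch
import torch.nn as nn

class CrossAttentionAggregation(nn.Module):
    """Aggregate features of nearby known points into each query location."""

    def __init__(self, dim: int = 64, k: int = 8):
        super().__init__()
        self.k = k
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, query_xyz, query_feat, known_xyz, known_feat):
        # query_xyz:  (B, Nq, 3) 3D positions of target (missing) pixels
        # query_feat: (B, Nq, C) 2D image features at the target pixels
        # known_xyz:  (B, Nk, 3) 3D positions of the given sparse depths
        # known_feat: (B, Nk, C) sparse 3D features of the given depths
        dist = torch.cdist(query_xyz, known_xyz)                      # (B, Nq, Nk)
        knn_idx = dist.topk(self.k, dim=-1, largest=False).indices   # (B, Nq, k)

        # Gather the k nearest known features for every query point.
        B, Nq, k = knn_idx.shape
        C = known_feat.shape[-1]
        idx = knn_idx.unsqueeze(-1).expand(B, Nq, k, C)
        neigh_feat = torch.gather(
            known_feat.unsqueeze(1).expand(B, Nq, -1, C), 2, idx
        )                                                             # (B, Nq, k, C)

        # Cross attention: queries come from the image features,
        # keys/values from the neighbouring sparse 3D features.
        q = self.to_q(query_feat).unsqueeze(2)                        # (B, Nq, 1, C)
        kmat = self.to_k(neigh_feat)                                  # (B, Nq, k, C)
        v = self.to_v(neigh_feat)                                     # (B, Nq, k, C)
        attn = (q * kmat).sum(-1) * self.scale                        # (B, Nq, k)
        attn = attn.softmax(dim=-1)
        return (attn.unsqueeze(-1) * v).sum(dim=2)                    # (B, Nq, C)

if __name__ == "__main__":
    B, Nq, Nk, C = 2, 128, 64, 64
    agg = CrossAttentionAggregation(dim=C, k=8)
    fused = agg(torch.rand(B, Nq, 3), torch.rand(B, Nq, C),
                torch.rand(B, Nk, 3), torch.rand(B, Nk, C))
    print(fused.shape)  # torch.Size([2, 128, 64])

Because the neighborhood lookup works directly on 3D coordinates rather than a fixed convolutional window, the aggregation is independent of how densely the input depth map is sampled, which is consistent with the generalizability claim in the abstract.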