Depth map
Artificial intelligence
Computer science
Computer vision
LiDAR
Pattern recognition (psychology)
Image (mathematics)
Remote sensing
Geology
Authors
Jiade Liu, Cheolkon Jung
Source
Journal: IEEE Access (Institute of Electrical and Electronics Engineers)
Date: 2022-01-01
Volume: 10, Pages: 114252-114261
Citations: 5
Identifier
DOI: 10.1109/access.2022.3215546
Abstract
In this paper, we propose new normal guided depth completion from sparse LiDAR data and a single color image, named NNNet. Sparse depth completion often uses normal maps as a constraint for model training. However, directly constructing a normal map from the color image introduces considerable noise into the normal map and reduces model performance. Thus, we use a new normal map as an intermediate constraint to promote the fusion of multi-modal features. We generate the new normal map from the sparse LiDAR depth data and use it as a constraint for network training. The new normal map is generated by converting the input depth into a grayscale image, constructing a normal map from it, replacing the Z channel of the normal map with the original depth, and finally adding a mask. Based on the new normal map, we construct an end-to-end network, NNNet, for sparse depth completion guided by the corresponding color image. NNNet consists of two branches. One branch generates the new normal map from the depth image and its corresponding color image, while the other constructs a dense depth image from the sparse depth and the predicted new normal map. The two branches fully merge their features through skip connections. In the loss function, we use an L2 loss to ensure that the new normal map acts as an effective constraint. Finally, we refine the dense depth image with a spatial propagation network to produce the output. Experimental results show that the new normal map provides effective constraints for sparse depth completion. Moreover, NNNet achieves an RMSE of 724.14 and outperforms most current state-of-the-art methods.
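The abstract describes the new normal map construction as a four-step recipe: depth to grayscale, grayscale to normal map, Z channel replaced by the original depth, then a validity mask. The snippet below is a minimal NumPy sketch of that recipe under stated assumptions: the input is a dense H×W depth array with zeros at missing LiDAR points, the grayscale conversion is a simple max-based normalization, and the normals come from finite differences. The function name `build_new_normal_map` and all implementation details are illustrative assumptions, not code released with the paper.

```python
import numpy as np

def build_new_normal_map(sparse_depth, eps=1e-6):
    """Hypothetical sketch of the 'new normal map' described in the abstract.

    sparse_depth: H x W float array, zeros where no LiDAR measurement exists.
    Returns an H x W x 3 array whose X/Y channels come from a normal map of
    the grayscale depth and whose Z channel is the original depth, masked to
    valid measurements.
    """
    # 1. Convert the sparse depth into a grayscale image in [0, 1]
    #    (assumed normalization; the paper does not specify the mapping here).
    gray = sparse_depth / (sparse_depth.max() + eps)

    # 2. Construct a normal map from the grayscale image via finite differences.
    dzdx = np.gradient(gray, axis=1)
    dzdy = np.gradient(gray, axis=0)
    normals = np.stack([-dzdx, -dzdy, np.ones_like(gray)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + eps

    # 3. Replace the Z channel of the normal map with the original depth.
    normals[..., 2] = sparse_depth

    # 4. Add a mask that keeps only pixels with valid (non-zero) LiDAR depth.
    mask = (sparse_depth > 0).astype(np.float32)
    return normals * mask[..., None]
```

In a training setup along the lines of the abstract, the output of such a construction would serve as the L2 supervision target for the branch that predicts the new normal map, while the second branch consumes the prediction to regress the dense depth.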