Neural implicit surfaces learning for multi-view reconstruction
Computer Science
Artificial Intelligence
Computer Vision
Authors
Yingyu Zhang, Bin Qin, Hao Xin, Wei Yang
Identifiers
DOI:10.1117/12.3025454
Abstract
Neural Radiance Fields (NeRF), a compelling technique in computer vision, represent a novel approach to view synthesis based on implicit scene representation. By learning to represent 3D scenes from images, NeRF aims to render photorealistic images of a scene from unobserved viewpoints, showcasing the immense potential of neural volumetric representations. As a method for novel view synthesis and 3D reconstruction, NeRF models find applications in robotics, urban mapping, autonomous navigation, virtual reality, augmented reality, and more. In this article, we introduce a 3D surface representation based on the Signed Distance Function (SDF) and develop a new volume rendering technique for training a neural SDF representation. During our research, we observed that traditional volume rendering methods perform poorly during surface reconstruction on scenes containing complex structures and self-occluding objects. We therefore design a new neural network that reduces the impact of complex structures and self-occluding objects on 3D reconstruction, yielding more precise surface reconstruction even in the absence of mask supervision. Our experiments on the DTU dataset demonstrate that this advantage is particularly evident for objects and scenes characterized by intricate structures and self-occlusion.
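The abstract does not give the rendering equations, but the core idea of training a neural SDF by volume rendering can be illustrated with one common SDF-to-opacity scheme (in the spirit of NeuS-style methods): map SDF values along a ray through a logistic CDF, derive discrete opacities, and alpha-composite them. The function name, the sharpness parameter `s`, and the logistic choice are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def sdf_to_weights(sdf_vals, s=64.0):
    """Convert per-sample SDF values along a ray into volume-rendering
    weights (illustrative sketch, not the paper's exact method).

    sdf_vals: (N,) SDF at N samples ordered front-to-back along the ray.
    s: sharpness of the logistic CDF applied to the SDF.
    """
    sdf_vals = np.asarray(sdf_vals, dtype=np.float64)
    # Logistic CDF of the SDF: ~1 outside the surface, ~0 inside.
    phi = 1.0 / (1.0 + np.exp(-s * sdf_vals))
    # Discrete opacity between consecutive samples, clamped to [0, 1].
    alpha = np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0, 1.0)
    # Standard alpha compositing: transmittance-weighted opacities.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return alpha * trans

# A ray crossing the surface (SDF changes sign) concentrates weight
# near the zero crossing, which is what makes the surface trainable
# from rendered colors alone.
sdf = np.linspace(1.0, -1.0, 9)   # samples going from outside to inside
w = sdf_to_weights(sdf)
peak = int(np.argmax(w))          # peak lies near the zero crossing
```

Because the weights are a differentiable function of the SDF values, photometric losses on rendered pixels back-propagate directly to the SDF network, which is what allows surface reconstruction without mask supervision.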