Artificial intelligence
Computer science
Grid
Voxel
Deep learning
Artificial neural network
Computer vision
3D reconstruction
Context (archaeology)
Pattern recognition (psychology)
Cross entropy
Division (mathematics)
Transformation (genetics)
Iterative reconstruction
Feature (linguistics)
Mathematics
Geography
Gene
Biochemistry
Geometry
Philosophy
Archaeology
Linguistics
Chemistry
Arithmetic
Authors
Jun Yu, Wenbin Yin, Zhi-Yi Hu, Yabin Liu
Identifier
DOI:10.1016/j.compeleceng.2022.108567
Abstract
Deep learning-based 3D reconstruction neural networks have achieved good performance in generating 3D features from 2D features, but they often suffer from feature loss during reconstruction. In this paper, we propose a multi-view object 3D reconstruction neural network named P2VNet. Its depth estimation module, spanning the front and back layers of P2VNet, realizes a smooth transformation from 2D features to 3D features and improves single-view reconstruction performance. We also propose a multi-scale fusion sensing module for multi-view fusion, which enlarges the receptive field to generate richer context-aware features. In addition, we introduce 3D Focal Loss as a replacement for binary cross-entropy to address the imbalanced occupancy of the voxel grid and the difficulty of classifying partially occupied cells. Experimental results demonstrate that P2VNet achieves higher accuracy than existing works.
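The abstract does not give the exact formulation of 3D Focal Loss. As a minimal sketch, assuming it follows the standard binary focal loss applied per voxel of the occupancy grid, the idea can be illustrated as below; `voxel_focal_loss`, `alpha`, and `gamma` are illustrative names and defaults, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def voxel_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss over a voxel occupancy grid (sketch).

    logits:  (B, D, H, W) raw network outputs, one score per voxel
    targets: (B, D, H, W) occupancy labels in {0, 1}
    alpha, gamma: illustrative hyperparameters, not the paper's values
    """
    # Per-voxel binary cross-entropy, unreduced so it can be re-weighted.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    # p_t is the predicted probability of the true class for each voxel.
    p_t = p * targets + (1 - p) * (1 - targets)
    # alpha_t balances occupied vs. empty voxels (occupancy is typically sparse).
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # The (1 - p_t)^gamma factor down-weights easy voxels, so hard,
    # partially occupied regions dominate the loss.
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```

Relative to plain binary cross-entropy, the modulating factor suppresses the contribution of the many easily classified empty voxels, which is one plausible way to counter the occupancy imbalance the abstract describes.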