Authors
Ruyu Liu, Yao Qin, Yuqi Pan, Qi Li, Bo Sun, Jianhua Zhang
Source
Journal: IEEE Transactions on Consumer Electronics
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 1-1
Identifier
DOI: 10.1109/tce.2024.3396812
Abstract
To create immersive 3D metaverse and digital twin experiences, 3D perception of scenes is crucial for consumer electronic products, so generating precise and dense depth estimates is vital. Existing technologies can produce decent depth estimates in small-scale indoor scenes, and depth estimation of large-scale outdoor omnidirectional 3D scenes is indispensable in practical applications; however, existing technologies fail to deliver high-quality omnidirectional depth estimation results. To address these challenges, we propose a novel omnidirectional depth completion network (DODCNet) designed explicitly for outdoor scenes, leveraging panorama-LiDAR sensors. This framework incorporates cross-modal fusion and distortion sensing, comprising two stages: the panoramic depth feature completion network (PDFCN) and the RGB-guided panoramic depth refinement network (RGB-PDRN). The PDFCN generates density-balanced geometric depth features to bridge the gap between cross-modalities. The RGB-PDRN further integrates cross-modal features at the channel level using attention mechanisms. Additionally, we introduce deformable spherical convolution to efficiently extract panoramic features and employ a panoramic depth-aware loss function to enhance the accuracy of omnidirectional depth estimation. Extensive experiments demonstrate that our proposed DODCNet outperforms state-of-the-art methods on the proposed panorama-LiDAR 360RGBD dataset and the Holicity dataset.
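The abstract mentions integrating cross-modal (RGB and depth) features at the channel level with attention. The paper's exact architecture is not given here, so the following is only a minimal illustrative sketch of the general idea, in the spirit of squeeze-and-excitation channel gating; the function name and the simplified parameter-free gate are assumptions, not the authors' method (a real network would use learned layers to produce the gate):

```python
import numpy as np

def channel_attention_fusion(rgb_feat, depth_feat):
    """Fuse RGB and depth feature maps of shape (C, H, W) at the channel level.

    Illustrative, parameter-free gate: global-average-pool the concatenated
    features per channel, squash with a sigmoid, reweight each channel,
    then sum the two modality branches back into a single (C, H, W) map.
    """
    fused = np.concatenate([rgb_feat, depth_feat], axis=0)   # (2C, H, W)
    squeeze = fused.mean(axis=(1, 2))                        # per-channel descriptor, (2C,)
    gate = 1.0 / (1.0 + np.exp(-squeeze))                    # sigmoid gate in (0, 1)
    weighted = fused * gate[:, None, None]                   # channel-wise reweighting
    c = rgb_feat.shape[0]
    return weighted[:c] + weighted[c:]                       # fused map, (C, H, W)

# Example: fuse dummy 4-channel 8x8 feature maps.
rgb = np.ones((4, 8, 8))
depth = np.zeros((4, 8, 8))
out = channel_attention_fusion(rgb, depth)
print(out.shape)  # (4, 8, 8)
```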