Keywords: pixel; fusion; computer science; ratio; artificial intelligence; algorithm; merge (version control); image fusion; point cloud; point (geometry); computer vision; image (mathematics); pattern recognition (psychology); mathematics; geometry; geography; linguistics; cartography; philosophy; information retrieval
Authors
Zhendong Liu, Xiaoli Liu, Hongliang Guan, Jie Yin, Fuzhou Duan, Shuaizhe Zhang, Wenhu Qv
Source
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Date: 2023-07-05
Volume/pages: 202: 356-368
Citations: 3
Identifier
DOI: 10.1016/j.isprsjprs.2023.06.011
Abstract
A depth map fusion algorithm fuses depth maps from different perspectives into a unified coordinate frame and performs surface calculations to generate a dense point cloud of the entire scene. Existing algorithms ensure the quality of these dense point clouds by eliminating inconsistencies between depth maps, but this often introduces many redundant calculations. In this paper, a depth map fusion algorithm based on pixel region prediction is proposed. First, the image combination is calculated to select a set of candidate neighbor images for each reference image in the scene. Second, voxels and measure estimates are constructed at a coarse scale, and an inference strategy and a corrector are proposed to merge pixel regions at a fine scale under the guidance of the coarse scale. Finally, the deduced pixel regions at the fine scale are used as image-space constraints for depth fusion. Public and real-world oblique image datasets are used for experimental verification. Compared with the well-known COLMAP, OpenMVS, Gipuma and ACMP methods, the number of redundant calculations is significantly reduced: on Data1–Data9 in the experiments, fusion efficiency improves by 47.5% to 156.6% as the number of images increases, while point cloud accuracy remains comparable to that of the other methods.
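The core fusion step described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: each view's depth map is back-projected into world coordinates and the resulting points are concatenated into one cloud, and a per-view boolean mask stands in for the predicted pixel regions, so pixels outside the mask are simply skipped rather than redundantly fused. The camera intrinsics `K` and world-from-camera poses `(R, t)` are assumed to be given per view.

```python
import numpy as np

def backproject(depth, K, R, t, mask):
    """Lift the masked, valid depth-map pixels of one view to 3-D world points."""
    v, u = np.nonzero(mask & (depth > 0))     # only the predicted pixel regions
    z = depth[v, u]
    pix = np.stack([u, v, np.ones_like(u)])   # homogeneous pixel coordinates, (3, N)
    cam = np.linalg.inv(K) @ pix * z          # camera-frame rays scaled by depth
    return (R @ cam + t[:, None]).T           # camera frame -> world frame, (N, 3)

def fuse(depths, cams, masks):
    """Concatenate back-projected points from every view into one dense cloud."""
    clouds = [backproject(d, K, R, t, m)
              for d, (K, R, t), m in zip(depths, cams, masks)]
    return np.concatenate(clouds, axis=0)
```

For example, a single 2×2 depth map of constant depth 1 with identity intrinsics and pose yields the four points (u, v, 1) for each unmasked pixel. Real pipelines would additionally check geometric consistency across views before accepting a point; here the mask plays that gatekeeping role.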