Computer science
Merge (version control)
Rendering (computer graphics)
Computer vision
Artificial intelligence
Augmented reality
Computer graphics (images)
Scale (ratio)
Parallel computing
Physics
Quantum mechanics
Authors
Max Bergfelt, Viktor Larsson, Hideo Saitô, Shohei Mori
Identifiers
DOI: 10.1109/ismar-adjunct60411.2023.00117
Abstract
A recent single-shot multiplane image (MPI) generation method makes it possible to copy an observed reality within a camera frame into other reality domains via view synthesis. Although the scene scale is unknown by the nature of single-shot MPI processing, camera tracking algorithms can estimate depth in the application's world coordinate system. Given such depth information, we propose to adjust the scale of a single-shot MPI to that of the currently observed scene. We find the individual scales of the MPI layers by minimizing the differences between the depth of the MPI rendering and that of camera tracking. We further observe that many layers fall within a close depth range; we therefore merge such layers into one to compact the MPI representation. We compared our method with baselines on real and synthetic datasets with dense and sparse depth inputs. Our results demonstrate that our algorithm achieves higher scores on image metrics and reduces the MPI data volume by up to 78%.
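The two steps the abstract describes — fitting a per-layer scale by minimizing the difference between MPI-rendered depth and tracked depth, then merging layers that end up at nearly the same depth — can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names, the closed-form least-squares scale fit, and the greedy relative-tolerance merge are all assumptions made for clarity.

```python
import numpy as np

def fit_layer_scales(mpi_depths, tracked_depths, layer_ids):
    """For each MPI layer k, fit a scalar s_k minimizing
    ||s_k * d_mpi - d_track||^2 over the pixels assigned to that layer.
    The closed-form least-squares solution is s_k = <d_mpi, d_track> / <d_mpi, d_mpi>.
    (Illustrative stand-in for the paper's depth-difference minimization.)"""
    scales = {}
    for k in np.unique(layer_ids):
        mask = layer_ids == k
        d, t = mpi_depths[mask], tracked_depths[mask]
        scales[int(k)] = float(np.dot(d, t) / np.dot(d, d))
    return scales

def merge_close_layers(layer_depths, rel_tol=0.05):
    """Greedily merge layers whose (scaled) depths lie within a relative
    tolerance of each other, keeping one representative depth per group.
    The tolerance value is a hypothetical parameter, not from the paper."""
    depths = sorted(layer_depths)
    merged = [depths[0]]
    for d in depths[1:]:
        if (d - merged[-1]) / merged[-1] > rel_tol:
            merged.append(d)  # too far from the last group: start a new layer
    return merged

# Example: tracked depth is exactly twice the MPI depth, so both layers
# recover a scale of 2; nearby layers collapse into one representative.
mpi = np.array([1.0, 2.0, 3.0, 4.0])
tracked = 2.0 * mpi
ids = np.array([0, 0, 1, 1])
print(fit_layer_scales(mpi, tracked, ids))        # {0: 2.0, 1: 2.0}
print(merge_close_layers([1.0, 1.02, 2.0, 2.05])) # [1.0, 2.0]
```

Merging after scale fitting is what compacts the representation: once layers are expressed in the tracked world scale, redundant layers at near-identical depths can be collapsed, which is consistent with the up-to-78% data reduction reported in the abstract.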