Keywords
Radiance; Asynchronous communication; RGB color model; Ratio; Artificial intelligence; Computer science; Function (biology); Asynchronous learning; Computer vision; Mathematics; Remote sensing; Geography; Cartography; Synchronous learning; Mathematics education; Telecommunications; Teaching method; Biology; Evolutionary biology; Cooperative learning
Authors
Zirui Wu, Yuantao Chen, Runyi Yang, Zhenxin Zhu, Chao Hou, Yongliang Shi, Hao Zhao, Guyue Zhou
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Identifier
DOI: 10.48550/arxiv.2211.07459
Abstract
It has been shown that learning radiance fields with depth rendering and depth supervision can effectively promote the quality and convergence of view synthesis. However, this paradigm requires input RGB-D sequences to be synchronized, hindering its usage in the UAV city modeling scenario. As there exists asynchrony between RGB images and depth images due to high-speed flight, we propose a novel time-pose function, which is an implicit network that maps timestamps to $\rm SE(3)$ elements. To simplify the training process, we also design a joint optimization scheme to jointly learn the large-scale depth-regularized radiance fields and the time-pose function. Our algorithm consists of three steps: (1) time-pose function fitting, (2) radiance field bootstrapping, and (3) joint pose error compensation and radiance field refinement. In addition, we propose a large synthetic dataset with diverse controlled mismatches and ground truth to evaluate this new problem setting systematically. Through extensive experiments, we demonstrate that our method outperforms baselines without regularization. We also show qualitatively improved results on a real-world asynchronous RGB-D sequence captured by a drone. Code, data, and models will be made publicly available.
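The core idea of a time-pose function — an implicit network mapping a scalar timestamp to an $\rm SE(3)$ element — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer sizes, the translation-plus-unit-quaternion parameterization of $\rm SE(3)$, and the `TimePoseFunction` class are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

class TimePoseFunction:
    """Toy implicit network: scalar timestamp t -> SE(3) pose.

    The pose is parameterized as a 3-vector translation plus a unit
    quaternion (hypothetical choice; the paper's architecture may differ).
    """

    def __init__(self, hidden=32):
        # One small hidden layer; weights would normally be fit to
        # the known (timestamp, pose) pairs of the RGB stream.
        self.W1 = rng.normal(scale=0.5, size=(hidden, 1))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.5, size=(7, hidden))
        self.b2 = np.zeros(7)

    def __call__(self, t):
        h = np.tanh(self.W1 @ np.array([t]) + self.b1)
        out = self.W2 @ h + self.b2
        trans, quat = out[:3], out[3:]
        # Project the rotation part onto the unit quaternions so the
        # output is a valid SE(3) element.
        quat = quat / np.linalg.norm(quat)
        return trans, quat

f = TimePoseFunction()
trans, quat = f(0.5)  # query the pose at an arbitrary timestamp
```

Because the function is continuous in `t`, a depth image with no directly recorded pose can be assigned one by evaluating the network at the depth frame's timestamp — this is what lets the asynchronous RGB and depth streams be registered into one radiance field.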