Authors
Taosha Fan, Joseph D. Ortiz, Ming Hsiao, Maurizio Monge, Jing Wang, Todd Murphey, Mustafa Mukadam
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 2
Identifier
DOI: 10.48550/arxiv.2305.07026
Abstract
Scaling to arbitrarily large bundle adjustment problems requires data and compute to be distributed across multiple devices. Centralized methods in prior works are only able to solve small or medium size problems due to overhead in computation and communication. In this paper, we present a fully decentralized method that alleviates computation and communication bottlenecks to solve arbitrarily large bundle adjustment problems. We achieve this by reformulating the reprojection error and deriving a novel surrogate function that decouples optimization variables from different devices. This function makes it possible to use majorization minimization techniques and reduces bundle adjustment to independent optimization subproblems that can be solved in parallel. We further apply Nesterov's acceleration and adaptive restart to improve convergence while maintaining its theoretical guarantees. Despite limited peer-to-peer communication, our method has provable convergence to first-order critical points under mild conditions. On extensive benchmarks with public datasets, our method converges much faster than decentralized baselines with similar memory usage and communication load. Compared to centralized baselines using a single device, our method, while being decentralized, yields more accurate solutions with significant speedups of up to 953.7x over Ceres and 174.6x over DeepLM. Code: https://joeaortiz.github.io/daba.
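As a reading aid, the sketch below shows the generic shape of a majorization-minimization loop with Nesterov's acceleration and adaptive (function-value) restart, the optimization pattern the abstract describes. It is not the paper's implementation (see the linked code for that): `mm_nesterov`, `surrogate_argmin`, and `objective` are hypothetical names, and the per-device decoupling is abstracted into a single surrogate-minimization callback.

```python
import numpy as np

def mm_nesterov(x0, surrogate_argmin, objective, max_iters=100, tol=1e-8):
    """Generic majorization-minimization with Nesterov acceleration and
    adaptive restart (a sketch, not the paper's algorithm).

    surrogate_argmin(y): minimizes a surrogate that majorizes the objective
        at y. In the decentralized setting this step would split into
        independent per-device subproblems solved in parallel.
    objective(x): the true cost (e.g., total reprojection error).
    """
    x_prev = x0.copy()
    y = x0.copy()          # extrapolated point
    t = 1.0                # Nesterov momentum parameter
    f_prev = objective(x_prev)
    for _ in range(max_iters):
        x = surrogate_argmin(y)       # MM step: minimize surrogate at y
        f = objective(x)
        if f > f_prev:                # adaptive restart: if the objective
            y = x_prev.copy()         # increased, drop momentum and redo
            t = 1.0                   # the step from the last good iterate
            x = surrogate_argmin(y)
            f = objective(x)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # extrapolation
        if abs(f_prev - f) < tol * max(1.0, abs(f_prev)):
            return x
        x_prev, t, f_prev = x, t_next, f
    return x_prev
```

Under this pattern, the restart keeps the method monotone enough to retain convergence guarantees while the momentum term speeds up progress; in the decentralized version described in the abstract, peer-to-peer communication would be needed only to form the next extrapolated point.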