Computer science
Computer graphics (images)
Event (particle physics)
Artificial intelligence
Computer vision
Gaussian distribution
Physics
Quantum mechanics
Authors
Zhenwei Chen, Zhan Lu, De Ma, Huajin Tang, Xudong Jiang, Qian Zheng, Gang Pan
Source
Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence
[Association for the Advancement of Artificial Intelligence (AAAI)]
Date: 2025-04-11
Volume/Issue: 39 (3): 2367-2375
Identifier
DOI: 10.1609/aaai.v39i3.32237
Abstract
High Dynamic Range (HDR) video reconstruction seeks to accurately restore the extensive dynamic range present in real-world scenes and is widely employed in downstream applications. Existing methods typically operate on one or a small number of consecutive frames, which often leads to inconsistent brightness across the video due to their limited perspective on the video sequence. Moreover, supervised learning-based approaches are susceptible to data bias, resulting in reduced effectiveness when confronted with test inputs exhibiting a domain gap relative to the training data. To address these limitations, we present an event-guided HDR video reconstruction method built on 3D Gaussian Splatting (3DGS), whose 3D consistency enforces consistent brightness across the video. We introduce HDR 3D Gaussians capable of simultaneously representing HDR and low-dynamic-range (LDR) colors. Furthermore, we incorporate a learnable HDR-to-LDR transformation optimized from the input event streams and LDR frames to eliminate data bias. Experimental results on both synthetic and real-world datasets demonstrate that the proposed method achieves state-of-the-art performance.
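A minimal, hypothetical PyTorch sketch of the general idea summarized in the abstract: rendered HDR colors are mapped to LDR through a small learnable tone-mapping module supervised by the observed LDR frames, while accumulated events constrain changes in log radiance between timestamps (the standard event-camera generation model). The module names, the exposure/gamma parameterization, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HDRToLDR(nn.Module):
    """Hypothetical learnable HDR-to-LDR tone mapping (exposure + gamma).

    A stand-in for the learnable HDR-to-LDR transformation mentioned in the
    abstract; the actual parameterization in the paper may differ.
    """

    def __init__(self):
        super().__init__()
        self.log_exposure = nn.Parameter(torch.zeros(1))        # learnable exposure
        self.inv_gamma = nn.Parameter(torch.tensor(1.0 / 2.2))  # learnable gamma

    def forward(self, hdr: torch.Tensor) -> torch.Tensor:
        # hdr: (..., 3) linear radiance, non-negative
        exposed = hdr * torch.exp(self.log_exposure)
        ldr = exposed.clamp_min(1e-6) ** self.inv_gamma
        return ldr.clamp(0.0, 1.0)


def event_log_intensity_loss(hdr_t0, hdr_t1, event_integral, contrast=0.2):
    """Assumed event supervision: accumulated event polarities approximate the
    change in log radiance between two timestamps."""
    pred = torch.log(hdr_t1.clamp_min(1e-6)) - torch.log(hdr_t0.clamp_min(1e-6))
    return F.l1_loss(pred, contrast * event_integral)


# Usage sketch with placeholder tensors standing in for HDR renders from the
# 3DGS representation at two timestamps, an observed LDR frame, and per-pixel
# accumulated events.
tone = HDRToLDR()
hdr_t0 = torch.rand(4, 4, 3) * 10.0
hdr_t1 = torch.rand(4, 4, 3) * 10.0
ldr_frame = torch.rand(4, 4, 3)
event_integral = torch.randn(4, 4, 3)

loss = F.l1_loss(tone(hdr_t1), ldr_frame) \
       + 0.1 * event_log_intensity_loss(hdr_t0, hdr_t1, event_integral)
loss.backward()
```

In a full pipeline the HDR tensors would be differentiable renders of the HDR 3D Gaussians, so both loss terms would also update the Gaussian parameters; here they are fixed placeholders so only the tone-mapping parameters receive gradients.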