Authors
Yunlong Lin, Ye Tian, Sixiang Chen, Zhenqi Fu, Yingying Wang, Wenhao Chai, Zhaohu Xing, Lei Zhu, Xinghao Ding
Source
Journal: Cornell University - arXiv
Date: 2024-07-20
Identifier
DOI: 10.48550/arxiv.2407.14900
Abstract
Existing low-light image enhancement (LIE) methods have achieved noteworthy success on synthetic distortions, yet they often fall short in practical applications. The limitations arise from two inherent challenges in real-world LIE: 1) collecting distorted/clean image pairs is often impractical and sometimes impossible, and 2) accurately modeling complex degradations is a non-trivial problem. To overcome these challenges, we propose the Attribute Guidance Diffusion framework (AGLLDiff), a training-free method for effective real-world LIE. Instead of explicitly defining the degradation process, AGLLDiff shifts the paradigm and models the desired attributes of normal-light images, such as exposure, structure, and color. These attributes are readily available and impose no assumptions about the degradation process, and they guide the diffusion sampling process toward a reliable high-quality solution space. Extensive experiments demonstrate that our approach outperforms the current leading unsupervised LIE methods across benchmarks on both distortion-based and perceptual metrics, and it performs well even under complex in-the-wild degradations.
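The abstract describes steering a pretrained diffusion model's sampling loop with losses on desired output attributes (e.g. exposure), rather than modeling the degradation. Below is a minimal toy sketch of that idea under stated assumptions: the "denoiser" is a trivial stand-in (a real method would use a pretrained network conditioned on the low-light input), and `exposure_grad` is a hypothetical attribute loss I introduce for illustration, not the paper's actual guidance terms.

```python
import numpy as np

def exposure_grad(x, target=0.5):
    """Gradient of a toy exposure loss L(x) = (mean(x) - target)^2,
    broadcast to every pixel. Pulls the image mean toward `target`."""
    return 2.0 * (x.mean() - target) * np.ones_like(x)

def guided_sample(x_t, steps=50, guidance_scale=0.4, seed=0):
    """Toy attribute-guided sampling loop (classifier-guidance style).
    At each step, the current estimate is nudged down the gradient of
    the attribute loss, steering samples toward the desired exposure
    without any task-specific training."""
    rng = np.random.default_rng(seed)
    x = x_t
    for t in range(steps, 0, -1):
        # Stand-in denoising update; a real sampler would apply a
        # pretrained diffusion network here.
        x = 0.9 * x
        # Attribute guidance: gradient step on the exposure loss.
        x = x - guidance_scale * exposure_grad(x)
        # Small stochastic term that shrinks as t -> 0.
        x = x + 0.01 * (t / steps) * rng.standard_normal(x.shape)
    return np.clip(x, 0.0, 1.0)

# Start from pure noise; the guidance drives the mean toward ~0.5.
x0 = guided_sample(np.random.default_rng(1).standard_normal((8, 8)))
print(x0.shape, round(float(x0.mean()), 3))
```

The key property mirrored from the abstract is that guidance uses only attributes of well-exposed images (here, a target mean brightness), with no assumption about how the low-light degradation arose.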