Authors
Bin Xia, Yulun Zhang, Shiyin Wang, Yitong Wang, Xing-Long Wu, Yapeng Tian, Wenming Yang, Luc Van Gool
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 2
Identifier
DOI: 10.48550/arxiv.2303.09472
Abstract
Diffusion models (DMs) have achieved SOTA performance by modeling the image synthesis process as a sequential application of a denoising network. However, unlike image synthesis, image restoration (IR) has a strong constraint: results must accord with the ground truth. Thus, for IR, it is inefficient for traditional DMs to run massive iterations on a large model to estimate whole images or feature maps. To address this issue, we propose an efficient DM for IR (DiffIR), which consists of a compact IR prior extraction network (CPEN), a dynamic IR transformer (DIRformer), and a denoising network. Specifically, DiffIR has two training stages: pretraining and training the DM. In pretraining, we input ground-truth images into CPEN$_{S1}$ to capture a compact IR prior representation (IPR) to guide the DIRformer. In the second stage, we train the DM to directly estimate the same IPR as the pretrained CPEN$_{S1}$ using only LQ images. We observe that since the IPR is only a compact vector, DiffIR can use fewer iterations than traditional DMs to obtain accurate estimations and generate more stable and realistic results. Since the iterations are few, our DiffIR can adopt a joint optimization of CPEN$_{S2}$, the DIRformer, and the denoising network, which further reduces the influence of estimation error. We conduct extensive experiments on several IR tasks and achieve SOTA performance while consuming lower computational costs. Code is available at \url{https://github.com/Zj-BinXia/DiffIR}.
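The abstract's central idea is that running the diffusion process over a compact prior vector (the IPR), rather than over whole images, makes a handful of reverse steps sufficient. The toy sketch below illustrates that structure only; `cpen_s1`, `toy_denoiser`, `estimate_ipr`, and all dimensions are hypothetical stand-ins, not the authors' architecture. The random projection plays the role of CPEN$_{S1}$, and the denoiser simulates a trained network that learns to predict the target IPR.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # compact IPR dimension (assumption; far smaller than an image)

def cpen_s1(gt_image):
    """Toy stand-in for CPEN_S1: compress a ground-truth image
    into a compact IR prior representation (IPR) vector via a
    fixed random projection."""
    flat = gt_image.reshape(-1)
    proj = rng.standard_normal((DIM, flat.size)) / np.sqrt(flat.size)
    return proj @ flat

def toy_denoiser(z, target_ipr, t):
    """Stand-in for the trained denoising network: each reverse step
    removes a fraction of the gap between the current estimate and
    the IPR it was trained to predict (t = 0 closes the gap fully)."""
    return z + (target_ipr - z) / (t + 1)

def estimate_ipr(target_ipr, steps=4):
    """Few-step reverse process over the compact vector: because the
    target is a small vector rather than a whole image, only a handful
    of iterations are needed."""
    z = rng.standard_normal(DIM)          # start from Gaussian noise
    for t in reversed(range(steps)):      # t = steps-1, ..., 1, 0
        z = toy_denoiser(z, target_ipr, t)
    return z

gt = rng.standard_normal((8, 8))          # toy "ground-truth image"
ipr = cpen_s1(gt)                         # stage-1 compact prior
est = estimate_ipr(ipr, steps=4)          # stage-2 few-step estimate
```

In this toy, the final step (t = 0) closes the remaining gap exactly, so `est` matches `ipr`; in DiffIR the second stage instead conditions the denoising network on the LQ image and is jointly optimized with CPEN$_{S2}$ and the DIRformer.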