Keywords
Computer science, Residual, Computation, Network architecture, Exploitation, Residual neural network, Baseline (sea), Artificial intelligence, Image (mathematics), Feature (linguistics), Pattern recognition (psychology), Deep learning, Scheme (mathematics), Algorithm, Mathematics, Oceanography, Geology, Mathematical analysis, Philosophy, Linguistics, Computer security
Authors
Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, Deyu Meng
Identifier
DOI:10.1109/cvpr.2019.00406
Abstract
Along with the deraining performance improvement of deep networks, their structures and learning have become increasingly complicated and diverse, making it difficult to analyze the contribution of individual network modules when developing new deraining networks. To handle this issue, this paper provides a better and simpler baseline deraining network by considering network architecture, input and output, and loss functions. Specifically, by repeatedly unfolding a shallow ResNet, a progressive ResNet (PRN) is proposed to take advantage of recursive computation. A recurrent layer is further introduced to exploit the dependencies of deep features across stages, forming our progressive recurrent network (PReNet). Furthermore, intra-stage recursive computation of the ResNet can be adopted in PRN and PReNet to notably reduce network parameters with only slight degradation in deraining performance. For network input and output, we take both the stage-wise result and the original rainy image as input to each ResNet, and finally output the prediction of the residual image. As for loss functions, a single MSE or negative SSIM loss is sufficient to train PRN and PReNet. Experiments show that PRN and PReNet perform favorably on both synthetic and real rainy images. Considering their simplicity, efficiency, and effectiveness, our models are expected to serve as a suitable baseline in future deraining research. The source codes are available at https://github.com/csdwren/PReNet.
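The abstract's core idea — unfolding a shallow shared-weight ResNet over several stages, feeding each stage the original rainy image together with the previous stage's estimate, carrying deep features across stages through a recurrent state, and predicting a residual (rain) image — can be illustrated with a toy sketch. This is a minimal, hypothetical stand-in, not the authors' implementation: real PReNet uses convolutional layers and an LSTM on image tensors, whereas here images are flattened toy vectors and the "ResNet stage" and recurrent layer are single random-weight matrices introduced purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # flattened toy "image" size; a stand-in for H*W*C

# Hypothetical toy weights standing in for the shared shallow ResNet and the
# recurrent layer; in the paper these are conv layers and a conv-LSTM.
W_in  = rng.normal(scale=0.1, size=(D, 2 * D))  # [stage estimate; rainy image] -> features
W_rec = rng.normal(scale=0.1, size=(D, 2 * D))  # [features; hidden state] -> new hidden state
W_out = rng.normal(scale=0.1, size=(D, D))      # hidden state -> predicted residual (rain) image

def prenet_infer(y, T=6):
    """Progressive recurrent deraining: T unfolded stages sharing one set of weights."""
    x = y.copy()        # stage-wise estimate, initialized with the rainy image
    h = np.zeros(D)     # recurrent state carrying deep features across stages
    for _ in range(T):
        # Each stage sees both its own previous output and the original rainy image.
        f = np.tanh(W_in @ np.concatenate([x, y]))
        # Inter-stage recurrence: deep features from earlier stages influence later ones.
        h = np.tanh(W_rec @ np.concatenate([f, h]))
        r = W_out @ h   # the network predicts the residual (rain) image ...
        x = y - r       # ... and the derained estimate is rainy image minus residual.
    return x

y = rng.normal(size=D)          # toy "rainy image"
derained = prenet_infer(y)
print(derained.shape)           # (16,)
```

Because the T stages share weights, adding stages deepens the effective computation without adding parameters — the parameter-efficiency property the abstract attributes to recursive computation.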