Computer science
Benchmark (surveying)
Context (archaeology)
Process (computing)
Task (project management)
Artificial intelligence
Encoder
Image (mathematics)
Deep learning
Pattern recognition (psychology)
Operating system
Paleontology
Economics
Management
Geography
Biology
Geodesy
Authors
Kailong Lin,Shaowei Zhang,Yu Luo,Jie Ling
Identifier
DOI:10.1016/j.vrih.2022.06.002
Abstract
Owing to the rapid development of deep networks, single image deraining has achieved significant progress. Various architectures have been designed to recursively or directly remove rain, and most rain streaks can be removed by existing deraining methods. However, many of them lose detail during deraining, resulting in visual artifacts. To resolve this detail-loss issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single image deraining, based on the observation that the most degraded areas of the background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls the single image deraining task into two subproblems: rain extraction and detail recovery. Specifically, a context aggregation attention network is first introduced to effectively extract rain streaks, and a rain attention map is then generated as an indicator to guide the detail-recovery process. For the detail-recovery sub-network, with the guidance of the rain attention map, a simple encoder–decoder model is sufficient to recover the lost details. Experiments on several well-known benchmark datasets show that the proposed approach achieves competitive performance in comparison with other state-of-the-art methods.
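To make the two-stage unrolling concrete, below is a minimal PyTorch-style sketch of the pipeline the abstract describes: a rain-extraction sub-network whose output is converted into a rain attention map, followed by an encoder-decoder detail-recovery sub-network guided by that map. All module names, layer widths, and the specific way the attention map is formed here are illustrative assumptions, not the authors' actual URDRN implementation.

    # Illustrative sketch only; layer choices are assumptions, not the paper's design.
    import torch
    import torch.nn as nn

    class RainExtractor(nn.Module):
        """Simplified stand-in for the context aggregation attention network:
        dilated convolutions aggregate context and predict a rain-streak layer."""
        def __init__(self, channels: int = 32):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 3, 3, padding=1),  # predicted rain layer
            )

        def forward(self, rainy: torch.Tensor) -> torch.Tensor:
            return self.body(rainy)

    class DetailRecovery(nn.Module):
        """Simple encoder-decoder that refines the coarse derained image,
        conditioned on the rain attention map (concatenated as an extra channel)."""
        def __init__(self, channels: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(4, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
            )

        def forward(self, coarse: torch.Tensor, attention: torch.Tensor) -> torch.Tensor:
            x = torch.cat([coarse, attention], dim=1)
            residual = self.decoder(self.encoder(x))
            return coarse + residual  # recovered details added back to the coarse estimate

    class URDRNSketch(nn.Module):
        """End-to-end wiring of the two subproblems: rain extraction,
        then rain-guided detail recovery."""
        def __init__(self):
            super().__init__()
            self.rain_extractor = RainExtractor()
            self.detail_recovery = DetailRecovery()

        def forward(self, rainy: torch.Tensor) -> torch.Tensor:
            rain = self.rain_extractor(rainy)      # estimated rain streaks
            coarse = rainy - rain                  # coarse background estimate
            # Rain attention map: larger where rain (and hence detail loss) is heavier.
            attention = torch.sigmoid(rain.mean(dim=1, keepdim=True))
            return self.detail_recovery(coarse, attention)

    if __name__ == "__main__":
        model = URDRNSketch()
        dummy = torch.rand(1, 3, 64, 64)   # a random "rainy" image
        print(model(dummy).shape)          # torch.Size([1, 3, 64, 64])

The key design point reflected here is that the detail-recovery stage is kept deliberately simple (a plain encoder-decoder), since the rain attention map already tells it where detail restoration is most needed.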