Underwater
Computer science
Artificial intelligence
Consistency (knowledge base)
Feature (linguistics)
Computer vision
Contrast (vision)
Image (mathematics)
Image quality
Transformation (genetics)
Pattern recognition (psychology)
Linguistics
Oceanography
Philosophy
Geology
Biochemistry
Chemistry
Gene
Authors
Yifan Xiang, Zhikui Chen
Identifier
DOI:10.1109/icivc58118.2023.10270633
Abstract
Learning a single underwater image enhancement network from unpaired degraded and clear images is of practical interest. In real underwater IoT scenarios, it is almost infeasible to obtain clear reference images corresponding to the captured images, so training enhancement networks in a supervised manner is challenging in the absence of such paired data. At the same time, existing methods often fail to learn the semantic and texture knowledge inherent in clear images from limited data because of the significant variability between the clear and degraded image domains. In response, we propose a two-branch contrast enhancement framework (DCE-Net) for unpaired underwater image enhancement, which learns mutual information between the clear and degraded image domains from a limited amount of unpaired data. The proposed DCE-Net consists of a Cyclic Consistency Module (CCM) and a Contrast Enhancement Module (CEM). Specifically, the CCM guides feature transformation and latent feature learning between underwater clear images and underwater degraded images, while the CEM constrains the consistency of semantic information between the two domains, encouraging better enhancement and improving image recovery quality. Extensive experiments on publicly available underwater datasets demonstrate the effectiveness of the proposed method.
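The two modules suggest a training objective with a cycle-consistency term (CCM) and a semantic-consistency term (CEM). The abstract does not give the exact losses, so everything below is a minimal illustrative sketch: the toy "generators" `g_ab`/`g_ba`, the trivial `feat` extractor, the L1/L2 choices, and the 0.5 weight are all assumptions, not the authors' implementation.

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    # CCM-style term (assumed L1): translate degraded -> clear -> degraded
    # and penalize the distance to the original input.
    return np.mean(np.abs(g_ba(g_ab(x)) - x))

def semantic_consistency_loss(x, y, feat):
    # CEM-style term (assumed L2): the degraded input and its enhanced
    # version should share the same semantic features.
    return np.mean((feat(x) - feat(y)) ** 2)

# Toy stand-ins for the real networks (placeholders, not the paper's models).
g_ab = lambda img: np.clip(img * 1.2, 0.0, 1.0)   # "enhance" degraded -> clear
g_ba = lambda img: np.clip(img / 1.2, 0.0, 1.0)   # inverse mapping, clear -> degraded
feat = lambda img: img.mean(axis=(0, 1))          # trivial per-channel "semantic" feature

# Fake degraded image in [0, 0.8], so g_ba exactly inverts g_ab (no clipping).
x = np.random.default_rng(0).random((8, 8, 3)) * 0.8
total = cycle_consistency_loss(x, g_ab, g_ba) \
        + 0.5 * semantic_consistency_loss(x, g_ab(x), feat)
```

Because the toy mappings are exact inverses on this input, the cycle term is (numerically) zero and the total loss is driven by the semantic term alone; with real networks both terms would be nonzero and minimized jointly.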