Keywords: underwater, artificial intelligence, image enhancement, unsupervised learning, deep learning, image (mathematics), computer science, distribution (mathematics), computer vision, mathematics, geology, oceanography, mathematical analysis
Authors
Alzayat Saleh,Marcus Sheaves,Dean R. Jerry,Mostafa Rahimi Azghadi
Abstract
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data. Underwater images are difficult to capture and are often of poor quality due to distortion and the loss of colour and contrast in water. This makes it difficult to train supervised deep learning models on large and diverse datasets, which can limit model performance. In this paper, we explore an alternative to supervised underwater image enhancement. Specifically, we propose a novel framework called Uncertainty Distribution Network (UDnet), which learns to adapt to the uncertainty distribution in its unsupervised reference-map (label) generation to produce enhanced output images. UDnet is composed of three main parts. A raw underwater image is first adjusted for contrast, saturation, and gamma correction; one of these adjusted images is then randomly fed to (1) a statistically guided multi-colour space stretch (SGMCSS) module that generates a reference map, which is used by (2) a U-Net-like conditional variational autoencoder (cVAE) module to extract features that feed (3) a probabilistic adaptive instance normalization (PAdaIN) block, which encodes feature uncertainties for final enhanced image generation. We use the SGMCSS module to ensure visual consistency with the raw input image and to provide an alternative to training with ground-truth images. Hence, UDnet needs no manual human annotation and can learn from a limited amount of data to achieve state-of-the-art results. We evaluated UDnet on eight publicly available datasets. The results show that it yields competitive performance compared to other state-of-the-art approaches on both quantitative and qualitative metrics. Our code is publicly available at https://github.com/alzayats/UDnet.
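The two most distinctive steps in the pipeline above are the statistically guided stretch that produces the reference map and the probabilistic AdaIN block that injects feature uncertainty. The sketch below illustrates both ideas in NumPy under stated assumptions; it is not the authors' implementation (see the linked repository for that), and the clipping width `k`, the predicted Gaussian parameters, and all function names are illustrative.

```python
import numpy as np

def statistical_stretch(channel, k=2.0, eps=1e-8):
    # Hedged sketch of a statistically guided contrast stretch:
    # clip the channel to [mean - k*std, mean + k*std], then rescale
    # to [0, 1]. The paper's SGMCSS module applies such stretches
    # across multiple colour spaces; `k` is an assumed parameter.
    mu, sigma = channel.mean(), channel.std()
    lo, hi = mu - k * sigma, mu + k * sigma
    stretched = np.clip(channel, lo, hi)
    return (stretched - lo) / (hi - lo + eps)

def padain(content, mu_pred, logvar_pred, rng, eps=1e-5):
    # Hedged sketch of probabilistic adaptive instance normalization:
    # instance-normalise the content features per channel, then shift
    # them by a mean SAMPLED from a predicted Gaussian (the
    # reparameterisation trick), so the affine parameters carry
    # feature uncertainty. All names here are illustrative.
    m = content.mean(axis=(1, 2), keepdims=True)
    s = content.std(axis=(1, 2), keepdims=True)
    normalised = (content - m) / (s + eps)
    sampled_mean = mu_pred + np.exp(0.5 * logvar_pred) * \
        rng.standard_normal(mu_pred.shape)
    return normalised + sampled_mean

rng = np.random.default_rng(0)

raw = rng.random((64, 64))                 # stand-in for one image channel
stretched = statistical_stretch(raw)       # reference-map-style stretch

feat = rng.standard_normal((8, 16, 16))    # stand-in C x H x W feature map
mu_pred = np.zeros((8, 1, 1))              # assumed predicted means
logvar_pred = np.full((8, 1, 1), -4.0)     # assumed small predicted variance
out = padain(feat, mu_pred, logvar_pred, rng)
```

Because the shift is sampled rather than fixed, repeated calls to `padain` on the same features yield slightly different outputs, which is how the block expresses uncertainty in the enhanced image.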