Computer science
Feature (linguistics)
Underwater
Block (permutation group theory)
Image (mathematics)
Image quality
Artificial intelligence
Image enhancement
Computer vision
Perspective (graphical)
Pattern recognition (psychology)
Geology
Philosophy
Oceanography
Linguistics
Mathematics
Geometry
Authors
Jingchun Zhou,Dehuan Zhang,Weishi Zhang
Identifier
DOI:10.1016/j.engappai.2023.105952
Abstract
Single underwater image enhancement remains a challenging ill-posed problem, even with advanced deep learning methods, due to significant information degeneration and various irrelevant content. Current deep-learning-based underwater image enhancement methods consider only a single clear image as the positive feature guiding the training of the enhancement network. However, the limited amount of helpful information constrains network performance, and irrelevant content consumes many bits. It is therefore crucial to efficiently utilize cross-view neighboring features and provide the corresponding relevant information for underwater enhancement. To address the challenges of degraded underwater images, we propose a novel cross-view enhancement network (CVE-Net) that uses high-efficiency feature alignment to better utilize neighboring features. We employ a self-built database to optimize the helpful information and develop a feature alignment module (FAM) to adapt the temporal features. A dual-branch attention block is designed to handle different types of information and give more weight to essential features. Experiments demonstrate that CVE-Net outperforms state-of-the-art (SOTA) underwater vision enhancement methods both qualitatively and quantitatively, significantly boosting underwater image quality and achieving a PSNR of 28.28 dB, 25% higher than Ucolor on the multi-view dataset. CVE-Net improves image quality while maintaining a good complexity-performance trade-off.
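The abstract mentions a dual-branch attention block that handles different types of information and re-weights essential features, but gives no architectural details. Below is a minimal PyTorch sketch of that general idea only: one branch produces channel weights, the other a spatial attention map, and the two branch outputs are fused with a residual connection. The class name DualBranchAttention, the channel/spatial split, and the reduction parameter are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn


class DualBranchAttention(nn.Module):
    """Hypothetical dual-branch attention block: one branch weights channels,
    the other weights spatial positions; outputs are fused with a residual path.
    (Sketch only -- the layout in CVE-Net may differ.)"""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Branch 1: channel attention (squeeze-and-excitation style)
        self.channel_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Branch 2: spatial attention from pooled channel statistics
        self.spatial_branch = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # 1x1 convolution to fuse the two branches back to the input width
        self.fuse = nn.Conv2d(channels * 2, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel-weighted features
        ca = x * self.channel_branch(x)
        # Spatial map computed from mean- and max-pooled channel statistics
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        sa = x * self.spatial_branch(stats)
        # Fuse both branches and keep a residual connection to the input
        return x + self.fuse(torch.cat([ca, sa], dim=1))


if __name__ == "__main__":
    block = DualBranchAttention(channels=64)
    feats = torch.randn(1, 64, 128, 128)   # dummy feature map
    print(block(feats).shape)              # torch.Size([1, 64, 128, 128])

The residual connection lets the block fall back to passing features through unchanged when neither attention branch is informative, which is a common design choice in enhancement networks.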