Computer science
Artificial intelligence
Computer vision
Image processing
Image (mathematics)
Semantics (computer science)
Natural language processing
Programming language
Authors
Dahua Li,Zixuan Wang,Xuan Li,Yu Xiao,Qiang Gao,Qi Liu
Identifiers
DOI:10.1117/1.jei.33.5.053053
Abstract
The study of underwater image enhancement is of significant importance to marine engineering and aquatic robotics. Data-driven approaches perform well in this field, yet they still struggle with low contrast, blurring, and color deviation. Moreover, their performance is constrained by the limited availability of paired underwater images, which makes it difficult to capture the nuances of different underwater scenes. To address these challenges, we introduce a semantic attention-guided transfer learning method for stylized underwater image enhancement (SGTL-SUIE). This method enables the generation of multiple stylized, enhanced images from a single distorted underwater image. Within SGTL-SUIE, a style-filtering module is proposed to better bridge the domain gap between distorted images and style reference images. A semantic pairing module then further mitigates the domain differences across varying semantics between reference and distorted images, guided by semantic information, and produces multiple semantic pairing codes. The transfer enhancement module takes these semantic pairing codes and co-encodes style features with distorted-image features through an encoder; a decoder network subsequently decodes the encoded features into diverse stylized outputs. To validate the proposed method's performance, qualitative and quantitative evaluations were conducted on multiple public datasets. The results demonstrate that SGTL-SUIE outperforms many state-of-the-art approaches and enhances the stylistic diversity of the generated images.
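The following is a minimal, hypothetical PyTorch sketch of the pipeline the abstract describes (style filtering, semantic pairing, and transfer enhancement via an encoder-decoder). All module names, layer choices, and tensor shapes are assumptions made only for illustration; the published architecture is certainly more elaborate than this sketch.

```python
# Hypothetical sketch of the SGTL-SUIE pipeline as described in the abstract.
# Layer choices and shapes are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn


class SGTLSUIESketch(nn.Module):
    def __init__(self, channels=64, num_semantics=8):
        super().__init__()
        # Shared feature encoder for distorted and style-reference images.
        self.encoder = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        # Style-filtering module: gates style features toward the distorted-image domain.
        self.style_gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        # Semantic pairing module: soft semantic map used to build pairing codes.
        self.semantic_proj = nn.Conv2d(channels, num_semantics, 1)
        # Transfer enhancement decoder: turns co-encoded features into a stylized output.
        self.decoder = nn.Sequential(nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, distorted_img, style_img):
        feat = self.encoder(distorted_img)                        # B x C x H x W
        style_feat = self.encoder(style_img)
        style_feat = style_feat * self.style_gate(style_feat)     # style filtering

        # Semantic pairing: one style code per semantic region of the distorted image.
        sem = torch.softmax(self.semantic_proj(feat), dim=1)      # B x S x H x W
        codes = torch.einsum('bshw,bchw->bsc', sem, style_feat)   # B x S x C pairing codes

        # Transfer enhancement: broadcast codes back to their regions and decode.
        style_map = torch.einsum('bshw,bsc->bchw', sem, codes)
        return self.decoder(feat + style_map)


if __name__ == "__main__":
    model = SGTLSUIESketch()
    distorted = torch.rand(1, 3, 128, 128)
    reference = torch.rand(1, 3, 128, 128)  # different references yield different stylizations
    enhanced = model(distorted, reference)
    print(enhanced.shape)  # torch.Size([1, 3, 128, 128])
```

Feeding the same distorted image with different style references through such a pipeline is what would yield the multiple stylized outputs the abstract claims.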