Computer science
Artificial intelligence
Representation (politics)
Image fusion
Degradation (telecommunications)
Image processing
Pattern recognition (psychology)
Image (mathematics)
Computer vision
Fusion
Coding (set theory)
Fusion rule
Content (measure theory)
Source code
Artificial neural network
Feature extraction
Unsupervised learning
Sensor fusion
Fusion mechanism
Contextual image classification
Feature learning
Feature detection (computer vision)
Authors
Han Xu,Xunpeng Yi,Lu Chen,Guangcan Liu,Jiayi Ma
Identifier
DOI:10.1109/tip.2025.3607628
Abstract
When dealing with low-quality source images, existing image fusion methods either fail to handle degradations or are restricted to specific degradations. This study proposes an unsupervised, unified, degradation-robust image fusion network, termed URFusion, in which various types of degradations can be uniformly eliminated during the fusion process, leading to high-quality fused images. URFusion is composed of three core modules: intrinsic content extraction, intrinsic content fusion, and appearance representation learning and assignment. It first extracts degradation-free intrinsic content features from images affected by various degradations. These content features then provide feature-level rather than image-level fusion constraints for optimizing the fusion network, effectively eliminating degradation residues and reliance on ground truth. Finally, URFusion learns the appearance representation of images and assigns the statistical appearance representation of high-quality images to the content-fused result, producing the final high-quality fused image. Extensive experiments on multi-exposure image fusion and multi-modal image fusion tasks demonstrate the advantages of URFusion in fusion performance and suppression of multiple types of degradations. The code is available at https://github.com/hanna-xu/URFusion.
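The three-stage pipeline the abstract describes (content extraction, content fusion, appearance assignment) can be illustrated with a minimal NumPy sketch. This is a toy analogy, not the paper's method: the real URFusion learns each stage with neural networks, whereas here content extraction is approximated by normalizing away global statistics, fusion by an element-wise maximum, and appearance assignment by AdaIN-style mean/std transfer from a reference image. All function names are hypothetical.

```python
import numpy as np

def extract_content(img):
    # Toy stand-in for intrinsic content extraction: normalize away
    # global appearance statistics (mean/std), leaving a roughly
    # exposure-invariant content signal. URFusion learns this instead.
    return (img - img.mean()) / (img.std() + 1e-8)

def fuse_content(c1, c2):
    # Toy content fusion: element-wise maximum of the two normalized
    # content maps (the paper optimizes a fusion network with
    # feature-level constraints rather than using a fixed rule).
    return np.maximum(c1, c2)

def assign_appearance(content, reference):
    # AdaIN-style statistic transfer: give the fused content the
    # mean/std of a high-quality reference's appearance representation.
    c = (content - content.mean()) / (content.std() + 1e-8)
    return c * reference.std() + reference.mean()

# Toy multi-exposure pair: the same scene, under- and over-exposed.
rng = np.random.default_rng(0)
scene = rng.random((8, 8))
under, over = scene * 0.3, scene * 0.7 + 0.3

fused = assign_appearance(
    fuse_content(extract_content(under), extract_content(over)),
    reference=scene,
)
print(fused.shape)  # (8, 8), with the reference's mean and std
```

The key point mirrored here is the decoupling: degradations that live in the appearance statistics (e.g. exposure shifts) are stripped before fusion and a clean appearance is re-imposed afterward.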