Keywords: Computer science, Chrominance, Encoder, Constraint (computer-aided design), Fusion, Artificial intelligence, Pattern recognition (psychology), Image fusion, Distortion (music), Image (mathematics), Computer vision, Mathematics, Bandwidth (computing), Geometry, Amplifier, Philosophy, Operating system, Luminance, Linguistics, Computer network, Identification
DOI: 10.1016/j.inffus.2021.06.001
Abstract
Existing image fusion methods always use the same representations for different modalities of medical images, or they solve the fusion problem by subjectively defining the characteristics to be preserved. However, this distorts modality-unique information and restricts fusion performance. To address these limitations, this paper proposes an unsupervised enhanced medical image fusion network. We apply both surface-level and deep-level constraints for enhanced information preservation. The surface-level constraint is based on saliency and abundance measurements and preserves the subjectively defined, intuitive characteristics. In the deep-level constraint, the unique information is objectively defined through the unique channels of a pre-trained encoder. Moreover, our method also enhances the chrominance information of the fusion results, because the high-quality details in structural images (e.g., MRI) are used to alleviate the mosaic artifacts in functional images (e.g., PET, SPECT). Both qualitative and quantitative experiments demonstrate the superiority of our method over state-of-the-art fusion methods.
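The abstract contains no code, so the following is a minimal, hypothetical sketch of two of the ideas it mentions: (1) a surface-level constraint that weights the sources by a pixel-wise saliency measure, and (2) chrominance handling in which only the luminance of the functional image (PET/SPECT) is fused with the structural image (MRI) while the functional chrominance is carried over. The gradient-magnitude saliency measure, the YCbCr conversion, the tiny network, and all names here are assumptions for illustration, not the authors' actual network or losses.

```python
# Hypothetical sketch (not the authors' implementation): luminance-only fusion
# with preserved functional chrominance, plus a saliency-weighted intensity loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


def rgb_to_ycbcr(img: torch.Tensor) -> torch.Tensor:
    """img: (B, 3, H, W) in [0, 1] -> YCbCr tensor of the same shape."""
    r, g, b = img[:, 0:1], img[:, 1:2], img[:, 2:3]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return torch.cat([y, cb, cr], dim=1)


def ycbcr_to_rgb(img: torch.Tensor) -> torch.Tensor:
    y, cb, cr = img[:, 0:1], img[:, 1:2] - 0.5, img[:, 2:3] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return torch.cat([r, g, b], dim=1).clamp(0.0, 1.0)


class TinyFusionNet(nn.Module):
    """Placeholder fusion network: maps (MRI, PET luminance) to a fused luminance."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, mri_y, pet_y):
        return self.net(torch.cat([mri_y, pet_y], dim=1))


def saliency_weights(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-6):
    """Pixel-wise weights from a simple gradient-magnitude saliency measure
    (one possible 'surface-level' measurement; the exact measure is an assumption)."""
    def grad_mag(x):
        dx = F.pad(x[..., :, 1:] - x[..., :, :-1], (0, 1, 0, 0)).abs()
        dy = F.pad(x[..., 1:, :] - x[..., :-1, :], (0, 0, 0, 1)).abs()
        return dx + dy
    sa, sb = grad_mag(a), grad_mag(b)
    wa = sa / (sa + sb + eps)
    return wa, 1.0 - wa


def surface_level_loss(fused_y, mri_y, pet_y):
    """Pull the fused luminance toward whichever source is locally more salient."""
    w_mri, w_pet = saliency_weights(mri_y, pet_y)
    return F.l1_loss(fused_y, w_mri * mri_y + w_pet * pet_y)


if __name__ == "__main__":
    mri = torch.rand(1, 1, 128, 128)        # structural image (grayscale)
    pet_rgb = torch.rand(1, 3, 128, 128)    # functional image (pseudo-color)

    pet_ycbcr = rgb_to_ycbcr(pet_rgb)
    net = TinyFusionNet()
    fused_y = net(mri, pet_ycbcr[:, 0:1])   # fuse only the luminance channels

    # Chrominance of the functional image is carried over unchanged, so the
    # color semantics are kept while MRI details sharpen the luminance.
    fused = ycbcr_to_rgb(torch.cat([fused_y, pet_ycbcr[:, 1:3]], dim=1))

    loss = surface_level_loss(fused_y, mri, pet_ycbcr[:, 0:1])
    print(fused.shape, float(loss))
```

This only illustrates the surface-level, subjectively defined part of the objective; the deep-level constraint described in the abstract (matching unique channels of a pre-trained encoder) would require that encoder and is not sketched here.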