Computer science
Artificial intelligence
Transformer
Image fusion
Deep learning
Feature extraction
Redundancy (engineering)
Pixel
Computer vision
Pattern recognition (psychology)
Image (mathematics)
Engineering
Voltage
Electrical engineering
Operating system
Authors
Yumeng Song, Yin Dai, Weibin Liu, Yue Liu, Lei Zhu, Qi Yu, Xinghan Liu, Ningfeng Que, Mingzhe Li
Identifier
DOI: 10.1016/j.compbiomed.2024.108463
Abstract
Medical image fusion can provide doctors with more detailed data and thus improve the accuracy of disease diagnosis. In recent years, deep learning has been widely applied to medical image fusion. Traditional fusion methods operate directly on pixels, for example by superimposing pixel values, and while deep learning has improved fusion quality, existing methods still suffer from problems such as edge blurring and information redundancy. In this paper, we propose a deep learning network model that integrates a Transformer with an improved DenseNet module; it is designed for medical images, addresses the above problems, and can also be transferred to natural images. The combination of the Transformer and dense concatenation enhances the method's feature extraction capability by limiting feature loss, which reduces the risk of edge blurring. We compared this method with several representative traditional methods and more advanced deep learning methods. The experimental results show that the Transformer and the improved DenseNet module provide strong feature extraction, and the method yields good results in terms of both visual quality and objective image evaluation metrics.
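The abstract describes combining DenseNet-style dense concatenation with a Transformer encoder to extract features from each input modality before fusing them. Below is a minimal PyTorch sketch of that general idea, not the authors' implementation: all module names, channel sizes, layer counts, and the concatenation-based fusion step are illustrative assumptions.

```python
# Sketch only: dense-connection CNN block + Transformer encoder per modality,
# fused by channel concatenation. Sizes and names are assumptions, not the paper's.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """DenseNet-style block: each layer's output is concatenated onto its input."""
    def __init__(self, in_ch, growth=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        for layer in self.layers:
            # Dense concatenation keeps earlier features, limiting feature loss.
            x = torch.cat([x, layer(x)], dim=1)
        return x

class FusionSketch(nn.Module):
    """Extract features from each modality, refine them with a Transformer
    encoder over spatial tokens, then fuse by concatenation and reconstruct."""
    def __init__(self, embed=64):
        super().__init__()
        self.encoder = DenseBlock(in_ch=1)
        self.proj = nn.Conv2d(self.encoder.out_ch, embed, 1)
        enc_layer = nn.TransformerEncoderLayer(d_model=embed, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * embed, embed, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(embed, 1, 3, padding=1),
        )

    def _extract(self, img):
        f = self.proj(self.encoder(img))          # (B, E, H, W)
        b, e, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)     # (B, H*W, E) spatial tokens
        tokens = self.transformer(tokens)
        return tokens.transpose(1, 2).reshape(b, e, h, w)

    def forward(self, img_a, img_b):
        fa, fb = self._extract(img_a), self._extract(img_b)
        return self.decoder(torch.cat([fa, fb], dim=1))

# Usage: fuse a 64x64 two-modality pair (random tensors stand in for real images).
model = FusionSketch()
fused = model(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(fused.shape)  # torch.Size([1, 1, 64, 64])
```

The sketch fuses features by simple channel concatenation; the paper may use a different fusion rule, loss function, or training scheme.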