Authors
Jingxue Huang,Xiaosong Li,Haishu Tan,Xiaoqi Cheng
Identifier
DOI:10.1007/978-3-031-46317-4_2
Abstract
As a powerful and continuously sought-after medical assistance technique, multimodal medical image fusion integrates the useful information from different single-modal medical images into a single fused image. Nevertheless, existing deep learning-based methods often feed the source images into a single network without considering the information across different channels and scales, which inevitably loses important information. To solve this problem, we propose a multimodal medical image fusion method based on a multichannel aggregated network. It iterates residual densely connected blocks to efficiently extract image features at three scales, and separately extracts the spatial-domain, channel, and fine-grained feature information of the source images at each scale. Simultaneously, we introduce multispectral channel attention to address the global average pooling problem of the vanilla channel attention mechanism. Extensive fusion experiments demonstrate that the proposed method surpasses representative state-of-the-art methods in both subjective and objective evaluation. The code of this work is available at https://github.com/JasonWong30/MCAFusion .
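The abstract does not detail how the multispectral channel attention replaces global average pooling; a common formulation of this idea pools each channel group with a different 2D DCT basis component, of which global average pooling is the special case of the lowest-frequency (0, 0) component. The sketch below illustrates that frequency-pooling step only; the function names and the choice of frequency pairs are hypothetical, not taken from the paper's implementation.

```python
import numpy as np

def dct_basis(u, v, H, W):
    # 2D DCT-II cosine basis component (u, v) on an H x W spatial grid.
    i = np.arange(H)[:, None]
    j = np.arange(W)[None, :]
    return (np.cos(np.pi * u * (2 * i + 1) / (2 * H))
            * np.cos(np.pi * v * (2 * j + 1) / (2 * W)))

def multispectral_channel_descriptor(x, freqs):
    # x: (C, H, W) feature map; freqs: list of (u, v) frequency pairs.
    # Channels are split into len(freqs) groups, and each group is pooled
    # against its own DCT basis instead of plain global average pooling.
    C, H, W = x.shape
    groups = np.array_split(np.arange(C), len(freqs))
    desc = np.empty(C)
    for g, (u, v) in zip(groups, freqs):
        basis = dct_basis(u, v, H, W)          # (H, W) weighting
        desc[g] = (x[g] * basis).sum(axis=(1, 2))  # weighted spatial pooling
    return desc  # (C,) descriptor fed to the attention MLP / sigmoid
```

With `freqs=[(0, 0)]` the basis is all ones, so the descriptor reduces to the spatial sum (global average pooling up to a constant factor); adding higher-frequency pairs lets different channel groups keep spatial-frequency information that plain averaging discards.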