Concepts
Computer science; Artificial intelligence; Encoder; Transformer; Pixel; Image fusion; Pattern recognition (psychology); Computer vision; Feature extraction; Data mining; Image (mathematics); Physics; Quantum mechanics; Voltage; Operating system
Authors
Yanyu Liu, Yongsheng Zang, Dongming Zhou, Jinde Cao, Rencan Nie, Ruichao Hou, Zhaisheng Ding, Jiatian Mei
Source
Journal: IEEE Journal of Biomedical and Health Informatics (Institute of Electrical and Electronics Engineers)
Date: 2023-07-01
Volume/Issue: 27 (7): 3489-3500
Cited by: 7
Identifier
DOI: 10.1109/jbhi.2023.3264819
Abstract
Medical image fusion technology is an essential component of computer-aided diagnosis, which aims to extract useful cross-modality cues from raw signals to generate high-quality fused images. Many advanced methods focus on designing fusion rules, but there is still room for improvement in cross-modal information extraction. To this end, we propose a novel encoder-decoder architecture with three technical novelties. First, we divide the medical images into two attributes, namely pixel intensity distribution attributes and texture attributes, and thus design two self-reconstruction tasks to mine as many specific features as possible. Second, we propose a hybrid network combining a CNN and a transformer module to model both long-range and short-range dependencies. Moreover, we construct a self-adaptive weight fusion rule that automatically measures salient features. Extensive experiments on a public medical image dataset and other multimodal datasets show that the proposed method achieves satisfactory performance.
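To make the described pipeline more concrete, the following is a minimal PyTorch sketch of two of the ideas in the abstract: a hybrid block that combines a CNN branch (short-range dependencies) with a self-attention branch (long-range dependencies), and a self-adaptive weight fusion rule that derives per-pixel weights from feature salience. The class and function names (HybridCNNTransformerBlock, self_adaptive_fusion) and the L1-activity/softmax weighting are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative sketch only (assumed PyTorch implementation); the paper's actual
# architecture and fusion rule may differ in detail.
import torch
import torch.nn as nn


class HybridCNNTransformerBlock(nn.Module):
    """CNN branch for local texture + self-attention branch for global context."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.conv(x)                            # short-range dependencies
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C)
        global_feat, _ = self.attn(tokens, tokens, tokens)  # long-range dependencies
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        return local + global_feat + x                  # residual mixing of both branches


def self_adaptive_fusion(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Fuse two modality feature maps with weights driven by feature salience.

    Salience is approximated here by channel-wise L1 activity; a softmax turns
    the two activity maps into adaptive per-pixel fusion weights (an assumption).
    """
    act_a = feat_a.abs().mean(dim=1, keepdim=True)      # (B, 1, H, W)
    act_b = feat_b.abs().mean(dim=1, keepdim=True)
    weights = torch.softmax(torch.cat([act_a, act_b], dim=1), dim=1)
    return weights[:, 0:1] * feat_a + weights[:, 1:2] * feat_b


if __name__ == "__main__":
    enc = HybridCNNTransformerBlock(channels=32)
    f_mri = enc(torch.randn(1, 32, 64, 64))             # e.g., MRI-derived features
    f_pet = enc(torch.randn(1, 32, 64, 64))             # e.g., PET-derived features
    fused = self_adaptive_fusion(f_mri, f_pet)
    print(fused.shape)                                  # torch.Size([1, 32, 64, 64])
```

The sketch keeps the two branches additive so the block can be dropped into an encoder-decoder without changing feature shapes; the salience-driven softmax is one common way to realize an "automatic" measurement of salient features, offered here only as a plausible reading of the abstract.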