Keywords
Computer science, Image fusion, Artificial intelligence, Fusion mechanism, Deep learning, Fusion, Ground truth, Feature extraction, Positron emission tomography, Sensor fusion, Medical imaging, Pattern recognition (psychology), Machine learning, Image (mathematics), Computer vision, Nuclear medicine, Medicine, Linguistics, Philosophy, Lipid bilayer fusion
Authors
Wanwan Huang,Han Zhang,Xiongwen Quan,Jia Wang
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/Issue: 71: 1-17
Citations: 7
Identifier
DOI:10.1109/tim.2022.3169546
Abstract
Research on deep learning-based methods for image fusion has become a hotspot. Medical image fusion suffers from scarce training samples and also lacks a unified end-to-end model that accepts different modal pairs as input. In this article, we propose a two-level dynamic adaptive network for medical image fusion, which addresses both problems and provides a unified fusion framework that takes advantage of different modal pairs. Specifically, we develop a dynamic meta-learning method at the task level, which achieves dynamic meta-knowledge transfer from the heterogeneous task of multifocus image fusion to medical image fusion via dynamic convolution decomposition (DCD). We then provide an efficient adaptive fusion method at the multimodal feature level, which uses a dynamic attention mechanism and a dynamic channel fusion mechanism to fuse features of different aspects. For model evaluation, we performed qualitative and quantitative tests on the transferred multifocus deep network and verified its superior fusion performance. On this basis, experiments on public datasets of the two most commonly used modal pairs (computerized tomography (CT)-magnetic resonance imaging (MRI) and positron emission tomography (PET)-MRI) show that our hierarchical model outperforms state-of-the-art methods in both visual effects and quantitative measurement. Our code is publicly available at https://github.com/zhanglabNKU/TDAN.
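The abstract's key mechanism, dynamic convolution decomposition, conditions the convolution weights on the input rather than keeping them fixed. A minimal NumPy sketch of that general idea follows: the effective weight is a static kernel plus a low-rank residual whose coefficients are predicted from the globally pooled input. All shapes, the pooling choice, and the linear coefficient predictor here are illustrative assumptions, not the paper's actual architecture (the full model and training code are in the linked TDAN repository).

```python
import numpy as np

def dynamic_weight(x, W0, P, Q, coeff_fn):
    """Input-conditioned weight W(x) = W0 + P @ diag(phi(x)) @ Q.

    W0: static weight, shape (C_out, C_in)
    P:  (C_out, L), Q: (L, C_in) -- low-rank basis of the dynamic residual
    coeff_fn: maps pooled features (C_in,) to L dynamic coefficients phi(x)
    """
    z = x.mean(axis=(-2, -1))          # global average pooling over spatial dims -> (C_in,)
    phi = coeff_fn(z)                  # input-dependent coefficients, shape (L,)
    return W0 + P @ np.diag(phi) @ Q   # static kernel plus low-rank dynamic residual

# Illustrative usage with a tiny random "feature map" and a linear coeff_fn (assumption).
rng = np.random.default_rng(0)
C_in, C_out, L = 4, 8, 2
x = rng.standard_normal((C_in, 5, 5))
W0 = rng.standard_normal((C_out, C_in))
P = rng.standard_normal((C_out, L))
Q = rng.standard_normal((L, C_in))
A = rng.standard_normal((L, C_in))
W = dynamic_weight(x, W0, P, Q, lambda z: A @ z)
assert W.shape == (C_out, C_in)        # one effective weight matrix per input sample
```

The point of the decomposition is that only L coefficients change per input, so the network adapts its kernels to each modality pair without learning a full weight tensor per condition.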