Computer science
Artificial intelligence
Neuroimaging
Modality (human-computer interaction)
Fusion
Fuse (electrical)
Pattern recognition (psychology)
Sensor fusion
Feature (linguistics)
Process (computing)
Machine learning
Exploitation
Data mining
Modal verb
Engineering
Medicine
Philosophy
Electrical engineering
Psychiatry
Operating system
Chemistry
Polymer chemistry
Linguistics
Computer security
Authors
Tao Zhang, Mingyang Shi
Identifiers
DOI:10.1016/j.jneumeth.2020.108795
Abstract
Compared with single-modal neuroimage classification of AD, multi-modal classification can achieve better performance by fusing complementary information. Exploring the synergy among multi-modal neuroimages contributes to identifying the pathological process of neurological disorders. However, it remains difficult to exploit multi-modal information effectively because of the lack of an effective fusion method. In this paper, we propose a deep multi-modal fusion network based on an attention mechanism, which can selectively extract features from the MRI and PET branches and suppress irrelevant information. In the attention model, the fusion ratio of each modality is assigned automatically according to the importance of its data. A hierarchical fusion method is adopted to ensure the effectiveness of multi-modal fusion. Evaluated on the ADNI dataset, the model outperforms state-of-the-art methods: the final classification accuracies for NC/AD, SMCI/PMCI, and the four-class task are 95.21 %, 89.79 %, and 86.15 %, respectively. Unlike early fusion and late fusion, the hierarchical fusion method helps the network learn the synergy between the multi-modal data. Compared with other prominent algorithms, the attention model enables our network to focus on regions of interest and to fuse the multi-modal data effectively. Benefiting from the hierarchical structure with the attention model, the proposed network can exploit both low-level and high-level features extracted from the multi-modal data and improve the accuracy of AD diagnosis. The results show its promising performance.
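The abstract does not give architectural details, so the following is only a minimal PyTorch sketch of the general idea it describes: a learned fusion ratio applied to MRI and PET feature maps, with fusion repeated at a low and a high feature level. The class names (AttentionFusion, HierarchicalFusionNet), channel sizes, and 3D convolution blocks are all hypothetical and are not taken from the paper.

import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    # Learns a fusion ratio for the MRI and PET branches and returns the
    # weighted sum of the two feature maps (illustrative design, not the
    # authors' exact module).
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),    # squeeze spatial dimensions
            nn.Flatten(),
            nn.Linear(2 * channels, 2),
            nn.Softmax(dim=1),          # two modality weights that sum to 1
        )

    def forward(self, mri_feat, pet_feat):
        # mri_feat, pet_feat: (batch, channels, D, H, W)
        w = self.gate(torch.cat([mri_feat, pet_feat], dim=1))
        w_mri = w[:, 0].view(-1, 1, 1, 1, 1)
        w_pet = w[:, 1].view(-1, 1, 1, 1, 1)
        return w_mri * mri_feat + w_pet * pet_feat

class HierarchicalFusionNet(nn.Module):
    # Two-level illustration: fuse once after the low-level blocks and again
    # after the high-level blocks, then classify from both fused features.
    def __init__(self, num_classes=2):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm3d(c_out), nn.ReLU(), nn.MaxPool3d(2))
        self.mri_low, self.pet_low = block(1, 16), block(1, 16)
        self.mri_high, self.pet_high = block(16, 32), block(16, 32)
        self.fuse_low = AttentionFusion(16)
        self.fuse_high = AttentionFusion(32)
        self.head = nn.Linear(16 + 32, num_classes)

    def forward(self, mri, pet):
        m1, p1 = self.mri_low(mri), self.pet_low(pet)
        f1 = self.fuse_low(m1, p1)                  # low-level fusion
        m2, p2 = self.mri_high(m1), self.pet_high(p1)
        f2 = self.fuse_high(m2, p2)                 # high-level fusion
        pooled = torch.cat([f1.mean(dim=(2, 3, 4)),
                            f2.mean(dim=(2, 3, 4))], dim=1)
        return self.head(pooled)

# Example with random volumes standing in for preprocessed MRI/PET scans.
mri = torch.randn(2, 1, 32, 32, 32)
pet = torch.randn(2, 1, 32, 32, 32)
logits = HierarchicalFusionNet(num_classes=2)(mri, pet)   # shape: (2, 2)

Because the softmax gate is computed per sample, the fusion ratio between MRI and PET can change from subject to subject, which is one plausible reading of "the fusion ratio of each modality is assigned automatically according to the importance of the data."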