Computer science
Feature (linguistics)
Artificial intelligence
Feature learning
Machine learning
Graph
Modal verb
Artificial neural network
Pattern
Modality (human–computer interaction)
Sensor fusion
Coding (set theory)
Pattern recognition (psychology)
Data mining
Theoretical computer science
Social science
Set (abstract data type)
Sociology
Programming language
Philosophy
Linguistics
Chemistry
Polymer chemistry
Authors
Baiying Lei, Yafeng Li, Wanyi Fu, Peng Yang, Shaobin Chen, Tianfu Wang, Xiaohua Xiao, Aihua Mao, Yu Fu, Shuqiang Wang, Hongbin Han, Jing Qin
Identifier
DOI:10.1016/j.media.2024.103213
Abstract
Multi-modal data can provide complementary information on Alzheimer's disease (AD) and its progression from different perspectives. Such information is closely related to the diagnosis, prevention, and treatment of AD, so it is both necessary and important to study AD through multi-modal data. Existing learning methods, however, usually ignore the influence of feature heterogeneity and directly fuse features only in the final stages. Furthermore, most of these methods focus on either local fusion features or global fusion features alone, neglecting the complementarity of features at different levels and thus failing to fully leverage the information embedded in multi-modal data. To overcome these shortcomings, we propose a novel framework for AD diagnosis that fuses gene, imaging, protein, and clinical data. Our framework learns feature representations in a shared feature space across modalities through a feature induction learning (FIL) module, thereby alleviating the impact of feature heterogeneity. In addition, local and global salient multi-modal feature interaction information at different levels is extracted through a novel dual multilevel graph neural network (DMGNN). We extensively validate the proposed method on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and experimental results demonstrate that our method consistently outperforms other state-of-the-art multi-modal fusion methods. The code is publicly available on GitHub: https://github.com/xiankantingqianxue/MIA-code.git
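The core idea of the abstract — mapping heterogeneous modality features into one shared feature space before fusion — can be illustrated with a toy sketch. This is a hypothetical, pure-Python illustration, not the paper's actual FIL module or DMGNN: each modality (gene, imaging, protein, clinical) gets its own linear projection to a common dimension, after which the aligned representations can be fused, here simply by element-wise averaging.

```python
import random

def make_projection(in_dim, out_dim, seed):
    # Hypothetical random linear map; in the paper these mappings are learned.
    rng = random.Random(seed)
    return [[rng.uniform(-1.0, 1.0) for _ in range(in_dim)] for _ in range(out_dim)]

def project(vec, weights):
    # Matrix-vector product: map a modality-specific vector into the shared space.
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def fuse(vectors):
    # Simple late fusion: element-wise average of the aligned representations.
    n = len(vectors)
    return [sum(vals) / n for vals in zip(*vectors)]

# Toy modalities with different native dimensionalities (hypothetical sizes).
modalities = {
    "gene": [0.2] * 8,
    "imaging": [0.5] * 16,
    "protein": [0.1] * 4,
    "clinical": [0.9] * 3,
}
shared_dim = 6
aligned = [project(vec, make_projection(len(vec), shared_dim, seed=i))
           for i, vec in enumerate(modalities.values())]
fused = fuse(aligned)
print(len(fused))  # the fused representation lives in the shared 6-dim space
```

Projecting every modality to the same dimensionality is what makes the subsequent fusion step well-defined; the paper's contribution is to learn these representations jointly and to combine them at multiple levels rather than with a single averaging step.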