Computer science
Modality
Connectome
Graph theory
Artificial intelligence
Graph
Pattern recognition (psychology)
Functional connectivity
Neuroscience
Theoretical computer science
Mathematics
Materials science
Psychology
Combinatorics
Polymer chemistry
Authors
Yanwu Yang,Xutao Guo,Zhikai Chang,Chenfei Ye,Yang Xiang,Ting Ma
Identifier
DOI:10.1109/bibm55620.2022.9995642
Abstract
Multi-modal neuroimaging technology has greatly improved diagnostic efficiency and accuracy, and provides complementary information for discovering objective disease biomarkers. Conventional deep learning methods, e.g., convolutional neural networks, overlook relationships between nodes and fail to capture topological properties in graphs. Graph neural networks have proven to be of great importance in modeling brain connectome networks and relating disease-specific patterns. However, most existing graph methods explicitly require known graph structures, which are not available in the sophisticated brain system. Especially in heterogeneous multi-modal brain networks, modeling interactions among brain regions while accounting for inter-modal dependencies remains a great challenge. In this study, we propose a Multi-modal Dynamic Graph Convolution Network (MDGCN) for structural and functional brain network learning. Our method benefits from modeling inter-modal representations and relating attentive multi-modal associations into dynamic graphs with a compositional correspondence matrix. Moreover, a bilateral graph convolution layer is proposed to aggregate multi-modal representations in terms of multi-modal associations. Extensive experiments on three datasets demonstrate the superiority of the proposed method in disease classification, with accuracies of 90.4%, 85.9%, and 98.3% in predicting Mild Cognitive Impairment, Parkinson's Disease, and Schizophrenia, respectively. Statistical evaluations of the correspondence matrix show high agreement with previously reported disease biomarkers.
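The abstract describes two ingredients: an attention-derived correspondence matrix that dynamically links nodes across the structural and functional modalities, and a bilateral graph convolution layer that aggregates each modality's features through those cross-modal links. The paper does not give the exact formulation here, so the following is a minimal NumPy sketch of one plausible reading; the function names (`correspondence_matrix`, `bilateral_gcn_layer`) and the choice of dot-product attention with a ReLU nonlinearity are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def correspondence_matrix(Hs, Hf):
    """Dynamic cross-modal graph: C[i, j] scores how strongly structural
    node i attends to functional node j (rows sum to 1).
    Hs, Hf: (N, d) node-feature matrices for the two modalities.
    Dot-product attention is an assumption for this sketch."""
    return softmax(Hs @ Hf.T, axis=-1)

def bilateral_gcn_layer(Hs, Hf, Ws, Wf):
    """One bilateral graph convolution step (illustrative):
    each modality aggregates the other modality's transformed features
    over the dynamic correspondence graph, followed by ReLU."""
    C = correspondence_matrix(Hs, Hf)          # (N, N) dynamic adjacency
    Hs_new = np.maximum(C @ (Hf @ Wf), 0.0)    # structural side gathers functional features
    Hf_new = np.maximum(C.T @ (Hs @ Ws), 0.0)  # functional side gathers structural features
    return Hs_new, Hf_new, C
```

Because `C` is recomputed from the current node features at every layer, the graph is "dynamic" rather than a fixed, pre-specified connectome, which matches the abstract's motivation that true brain graph structures are unknown.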