Discriminant
Interpretability
Artificial intelligence
Computer science
Canonical correlation
Pattern recognition (psychology)
Deep learning
Feature extraction
Machine learning
Feature (linguistics)
Convolutional neural network
Representation (politics)
Feature learning
Law
Philosophy
Politics
Linguistics
Political science
Authors
Qi Zhu,Bingliang Xu,Jiashuang Huang,Heyang Wang,Ruting Xu,Wei Shao,Daoqiang Zhang
Source
Journal: IEEE Transactions on Medical Imaging
[Institute of Electrical and Electronics Engineers]
Date: 2022-12-19
Volume/Issue: 42 (5): 1472-1483
Citations: 24
Identifier
DOI:10.1109/tmi.2022.3230750
Abstract
Multi-modal fusion has become an important data-analysis technique in Alzheimer's disease (AD) diagnosis, aiming to effectively extract and exploit complementary information across modalities. However, most existing fusion methods pursue a common feature representation through transformation and ignore discriminative structural information among samples. In addition, most fusion methods rely on high-order feature extraction, such as deep neural networks, which makes it difficult to identify biomarkers. In this paper, we propose a novel method, the deep multi-modal discriminative and interpretability network (DMDIN), which aligns samples in a discriminative common space and provides a new approach to identifying significant brain regions (ROIs) in AD diagnosis. Specifically, we reconstruct each modality with a hierarchical representation through a multilayer perceptron (MLP) and exploit shared self-expression coefficients, constrained to be block-diagonal, to embed inter-class and intra-class structural information. Further, generalized canonical correlation analysis (GCCA) is adopted as a correlation constraint to generate a discriminative common space in which samples of the same category cluster together while samples of different categories are pushed apart. Finally, to enhance the interpretability of the deep learning model, we use knowledge distillation to reproduce the coordinated representations and capture the influence of individual brain regions on AD classification. Experiments show that the proposed method outperforms several state-of-the-art methods in AD diagnosis.
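The GCCA constraint mentioned in the abstract can be illustrated with a minimal NumPy sketch of the classical MAXVAR formulation: given several views of the same samples, it finds a shared representation G whose columns lie maximally in the column spaces of all views (the top eigenvectors of the summed per-view projection matrices). This is only an illustrative sketch, not the authors' DMDIN implementation; the function name `gcca_shared_space` and the ridge term `reg` are assumptions for numerical stability.

```python
import numpy as np

def gcca_shared_space(views, k, reg=1e-6):
    """MAXVAR-style GCCA sketch.

    views : list of (n, d_i) arrays, one per modality, rows aligned by sample.
    k     : dimensionality of the shared space.
    Returns G, an (n, k) orthonormal shared representation.
    """
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        Xc = X - X.mean(axis=0)  # center each view
        d = Xc.shape[1]
        # Projection onto the column space of this view (ridge-regularized).
        P = Xc @ np.linalg.solve(Xc.T @ Xc + reg * np.eye(d), Xc.T)
        M += P
    # Shared space: top-k eigenvectors of the summed projections.
    vals, vecs = np.linalg.eigh(M)  # ascending eigenvalues
    G = vecs[:, ::-1][:, :k]
    return G
```

In a discriminative setting such as the one described above, G would additionally be pulled toward a class-separable configuration by the block-diagonal self-expression constraint; this sketch shows only the correlation-maximizing part.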