Topics
Artificial intelligence
Cognition
Computer science
Machine learning
Deep learning
Task (project management)
Feature (linguistics)
Pattern recognition (psychology)
Neuroimaging
Convolutional neural network
Neuroscience
Psychology
Linguistics
Philosophy
Economics
Management
Authors
Wei Hu, Xiang-He Meng, Yuntong Bai, Aiying Zhang, Gang Qu, Biao Cai, Gemeng Zhang, Tony W. Wilson, Julia M. Stephen, Vince D. Calhoun, Yuping Wang
Source
Journal: IEEE Transactions on Medical Imaging
[Institute of Electrical and Electronics Engineers]
Date: 2021-05-01
Volume/Issue: 40 (5): 1474-1483
Citations: 25
Identifier
DOI:10.1109/tmi.2021.3057635
Abstract
The combination of multimodal imaging and genomics provides a more comprehensive way to study mental illnesses and brain functions. Deep network-based data fusion models have been developed to capture their complex associations, resulting in improved diagnosis of diseases. However, deep learning models are often difficult to interpret, posing challenges for uncovering biological mechanisms using these models. In this work, we develop an interpretable multimodal fusion model to perform automated diagnosis and result interpretation simultaneously. We name it Grad-CAM guided convolutional collaborative learning (gCAM-CCL), which is achieved by combining intermediate feature maps with gradient-based weights. The gCAM-CCL model can generate interpretable activation maps to quantify pixel-level contributions of the input features. Moreover, the estimated activation maps are class-specific, which can therefore facilitate the identification of biomarkers underlying different groups. We validate the gCAM-CCL model on a brain imaging-genetics study, and demonstrate its applications to both the classification of cognitive function groups and the discovery of underlying biological mechanisms. Specifically, our analysis results suggest that during task-fMRI scans, several object recognition related regions of interest (ROIs) are activated, followed by several downstream encoding ROIs. In addition, the high cognitive group may have stronger neurotransmission signaling, while the low cognitive group may have problems in brain/neuron development due to genetic variations.
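The abstract describes the core gCAM-CCL mechanism as combining intermediate feature maps with gradient-based weights to produce class-specific activation maps. A minimal NumPy sketch of the generic Grad-CAM computation underlying this idea is shown below; it is not the authors' implementation, and the array shapes and variable names are illustrative assumptions:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Sketch of a Grad-CAM activation map.

    feature_maps: (K, H, W) intermediate feature maps A^k from a conv layer
    gradients:    (K, H, W) gradients of the target class score w.r.t. A^k
    Returns an (H, W) non-negative, class-specific activation map.
    """
    # Channel weights alpha_k: global-average-pool the gradients spatially
    weights = gradients.mean(axis=(1, 2))              # shape (K,)
    # Weighted combination of feature maps over channels
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    # ReLU keeps only features with a positive influence on the class
    return np.maximum(cam, 0.0)

# Toy example with 2 channels of 4x4 feature maps (random placeholders)
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4, 4))
dY = rng.standard_normal((2, 4, 4))
cam = grad_cam(A, dY)
```

Because the gradients are taken with respect to a particular class score, maps computed this way differ across classes, which is what allows group-specific biomarker identification as described above.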