Keywords: Computer science, Transformer, Alzheimer's disease, Disease, Artificial intelligence, Medicine, Engineering, Electrical engineering, Internal medicine, Voltage
Authors
Qi Yu,Qian Ma,Lazica Da,Jiahui Li,Mengying Wang,Andi Xu,Z. B. Li,Wenyuan Li
Identifier
DOI:10.1016/j.compbiomed.2024.108979
Abstract
In Alzheimer's disease (AD) assessment, traditional deep learning approaches have often employed separate methodologies to handle the diverse modalities of input data. Recognizing the critical need for a cohesive and interconnected analytical framework, we propose the AD-Transformer, a novel transformer-based unified deep learning model. This framework integrates structural magnetic resonance imaging (sMRI), clinical, and genetic data from the extensive Alzheimer's Disease Neuroimaging Initiative (ADNI) database, encompassing 1651 subjects. By employing a Patch-CNN block, the AD-Transformer efficiently transforms image data into image tokens, while a linear projection layer converts non-image data into corresponding tokens. As the core, a transformer block learns comprehensive representations of the input data, capturing the intricate interplay between modalities. The AD-Transformer sets a new benchmark in AD diagnosis and Mild Cognitive Impairment (MCI) conversion prediction, achieving average area under the curve (AUC) values of 0.993 and 0.845, respectively, surpassing those of traditional image-only models and non-unified multimodal models. Our experimental results confirm the potential of the AD-Transformer as a powerful tool in AD diagnosis and MCI conversion prediction. By providing a unified framework that jointly learns holistic representations of both image and non-image data, the AD-Transformer paves the way for more effective and precise clinical assessments, offering a clinically adaptable strategy for leveraging diverse data modalities in the battle against AD.