Computer science
Convolutional neural network
Artificial intelligence
Neuroimaging
Pattern recognition (psychology)
Parameterized complexity
Deep learning
Feature extraction
Transformer
Machine learning
Neuroscience
Algorithm
Biology
Physics
Quantum mechanics
Voltage
Authors
Rahma Kadri, Bassem Bouaziz, Mohamed Tmar, Faïez Gargouri
Identifier
DOI: 10.1016/j.dsp.2023.104229
Abstract
Convolutional neural networks (CNNs) have been widely used in medical imaging applications, including the classification of brain diseases such as Alzheimer's disease (AD) from neuroimaging data. Owing to their architectural inductive bias, CNNs are used to extract the brain regions potentially related to AD from various imaging modalities. The major limitation of current CNN-based models is that they do not capture long-range relationships or long-distance correlations among image features. Vision transformers (ViTs) have demonstrated outstanding performance in encoding long-range relationships, with strong modeling capacity and global feature extraction enabled by the self-attention mechanism. However, ViTs do not model spatial information or local features within the image and are hard to train. Researchers have shown that combining a CNN with a transformer yields outstanding results. In this study, two new methods are proposed for Alzheimer's disease diagnosis. The first combines the Swin transformer with an EfficientNet enhanced with multi-head attention and a Depthwise Over-parameterized Convolutional layer (DO-Conv). The second modifies the CoAtNet network with ECA-Net and fused inverted residual blocks. We evaluated the effectiveness of the proposed methods on the Open Access Series of Imaging Studies (OASIS) and the Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets, and further assessed them with Gradient-weighted Class Activation Mapping (Grad-CAM). The first method achieved a classification accuracy of 93.23% on the OASIS dataset; the second achieved 97.33%. We also applied different multimodal image fusion schemes (MRI with PET, and MRI with CT) using the proposed method. The experimental results show that fusion based on PET and MRI outperforms fusion based on MRI and CT, reaching 99.42% accuracy. Our methods outperform several traditional CNN models and recent transformer-based methods for AD classification.
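To make the CNN + transformer hybrid pattern described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch: a small convolutional stem extracts local features, and a transformer encoder applies global self-attention over the flattened feature map. The class and parameter names (ConvTransformerHybrid, embed_dim, depth) are illustrative assumptions; this is not the authors' Swin/EfficientNet-DO-Conv or CoAtNet/ECA-Net architecture.

```python
import torch
import torch.nn as nn

class ConvTransformerHybrid(nn.Module):
    """Generic CNN + transformer hybrid sketch (not the paper's model):
    conv stem for local features, transformer encoder for long-range context."""
    def __init__(self, num_classes=4, embed_dim=64, num_heads=4, depth=2):
        super().__init__()
        # Convolutional stem: captures local spatial patterns, downsamples by 4x
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim), nn.ReLU(inplace=True),
        )
        # Transformer encoder: self-attention over the sequence of feature tokens
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        f = self.stem(x)                       # (B, C, H/4, W/4): local features
        tokens = f.flatten(2).transpose(1, 2)  # (B, N, C): one token per spatial position
        tokens = self.encoder(tokens)          # global self-attention across all positions
        return self.head(tokens.mean(dim=1))   # mean-pool tokens, then classify

# Example usage with a dummy grayscale slice batch (assumed 1-channel input)
model = ConvTransformerHybrid(num_classes=4)
logits = model(torch.randn(2, 1, 128, 128))   # -> shape (2, 4)
```

The sketch only illustrates the division of labor the abstract argues for: convolutions supply the local inductive bias that ViTs lack, while self-attention supplies the long-range dependencies that plain CNNs miss.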