Dual-3DM3AD: Mixed Transformer Based Semantic Segmentation and Triplet Pre-Processing for Early Multi-Class Alzheimer’s Diagnosis
Computer Science
Artificial Intelligence
Softmax Function
Pattern Recognition (Psychology)
Segmentation
Convolutional Neural Network
Authors
Arfat Ahmad Khan, Rakesh Kumar Mahendran, P. Kumar, Muhammad Faheem
Source
Journal: IEEE Transactions on Neural Systems and Rehabilitation Engineering (Institute of Electrical and Electronics Engineers). Date: 2024-01-01. Volume 32: 696-707. Cited by: 20.
Alzheimer's Disease (AD) is a widespread, chronic, irreversible, and degenerative condition, and its early detection during the prodromal stage is of utmost importance. Typically, AD studies rely on a single data modality, such as MRI or PET, for making predictions. Nevertheless, combining metabolic and structural data can offer a comprehensive perspective on AD staging analysis. To address this goal, this paper introduces an innovative multi-modal fusion-based approach named Dual-3DM³-AD. This model is proposed for accurate and early Alzheimer's diagnosis by considering both MRI and PET image scans. Initially, we pre-process both images in terms of noise reduction, skull stripping, and 3D image conversion using the Quaternion Non-Local Means Denoising Algorithm (QNLM), a morphology function, and the Block Divider Model (BDM), respectively, which enhances image quality. Furthermore, we adapt a Mixed Transformer with Furthered U-Net to perform semantic segmentation while minimizing complexity. The Dual-3DM³-AD model consists of a multi-scale feature extraction module for extracting appropriate features from both segmented images. The extracted features are then aggregated using the Densely Connected Feature Aggregator Module (DCFAM) to exploit both modalities. Finally, a multi-head attention mechanism is adapted for feature dimensionality reduction, and a softmax layer is applied for multi-class Alzheimer's diagnosis. The proposed Dual-3DM³-AD model is compared with several baseline approaches using multiple performance metrics. The final results show that the proposed work achieves 98% accuracy, 97.8% sensitivity, 97.5% specificity, 98.2% F-measure, and better ROC curves, outperforming existing models in multi-class Alzheimer's diagnosis.
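The final stage of the pipeline (multi-head attention for dimensionality reduction, followed by a softmax layer for multi-class staging) can be illustrated with a minimal NumPy sketch. All shapes, the number of heads, the random projection weights, and the four-class labeling (e.g. CN/EMCI/LMCI/AD) are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    """Self-attention over fused feature tokens.

    x: (tokens, dim) array; projection weights are random placeholders
    standing in for learned parameters.
    """
    t, d = x.shape
    hd = d // num_heads  # per-head dimension
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Split into heads: (heads, tokens, hd)
    split = lambda m: m.reshape(t, num_heads, hd).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    # Scaled dot-product attention per head
    att = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(hd), axis=-1)
    # Merge heads back: (tokens, dim)
    return (att @ v).transpose(1, 0, 2).reshape(t, d)

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 64))       # hypothetical DCFAM-aggregated features
attended = multi_head_attention(feats, num_heads=4, rng=rng)
pooled = attended.mean(axis=0)              # reduce token dimension to one vector
W_cls = rng.standard_normal((64, 4)) / 8.0  # placeholder classifier for 4 stages
probs = softmax(pooled @ W_cls)             # per-class probabilities summing to 1
```

The mean-pooling step here is one simple way to collapse the attended tokens before classification; the paper itself only states that attention reduces feature dimensionality ahead of the softmax layer.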