Abstract
Alzheimer's disease (AD) is a complex neurodegenerative condition that affects millions of people worldwide, necessitating early and accurate diagnosis for optimal patient care. This study presents a novel Two-Level Multimodal Data Fusion Integrated Incremental Learner Ensemble Classifier (TMDFILE) for Alzheimer's detection. The method integrates temporal, spatial, spectral, audio, and text data modalities, utilising a gating mechanism to optimise the contribution of each modality. Incremental learning is employed to adapt to evolving data patterns and enhance long-term performance. The proposed TMDFILE was evaluated across five diverse datasets: it achieved an accuracy of 94.5%, precision of 93.5%, recall of 95.1%, and F-measure of 94.1% on the ADNI dataset; an accuracy of 94.9%, with precision, recall, and F-measure of 94.5%, 94.1%, and 94.3%, respectively, on the OASIS dataset; an accuracy of 93.5%, precision of 95.1%, and recall of 94.1% on the EEG Emotion Recognition dataset; an accuracy of 94.5%, precision of 93.5%, and recall of 95.1% on the Aberystwyth Dementia dataset, providing reliable classifications that contribute to early detection of cognitive decline; and robust performance, with an accuracy of 94.5%, precision of 93.5%, and recall of 95.1%, on the BRATS dataset, which is relevant to brain-imaging analysis for Alzheimer's detection. TMDFILE consistently outperformed traditional classifiers, including Support Vector Machines, Random Forests, and Convolutional Neural Networks, achieving an average precision of 93.5%, recall of 95.1%, F-measure of 94.1%, and accuracy of 94.5%. These findings underscore TMDFILE's effectiveness in diagnostic accuracy and reliability, establishing it as a promising tool for Alzheimer's disease detection in clinical and research applications.
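The gated multimodal fusion described above can be illustrated with a minimal sketch. This is not the paper's implementation: the gate parameterisation (a per-modality linear score followed by a softmax over the five modalities), the embedding dimension, and all parameter names (`gate_weights`, `gate_bias`) are illustrative assumptions, shown only to make the "gating mechanism optimises each modality's contribution" idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_fusion(modality_features, gate_weights, gate_bias):
    """Fuse per-modality feature vectors via a softmax gate.

    modality_features: (M, D) array, one D-dim embedding per modality.
    gate_weights (M, D) and gate_bias (M,) are hypothetical learned
    gate parameters. Returns the fused D-dim vector and the M gate
    coefficients (non-negative, summing to 1).
    """
    # Score each modality from its own embedding, then normalise so the
    # coefficients weight each modality's contribution to the fusion.
    scores = (modality_features * gate_weights).sum(axis=1) + gate_bias
    alphas = softmax(scores)              # (M,) modality weights
    fused = alphas @ modality_features    # weighted sum over modalities
    return fused, alphas

# Five modalities (temporal, spatial, spectral, audio, text), 8-dim each.
feats = rng.normal(size=(5, 8))
W = rng.normal(size=(5, 8)) * 0.1
b = np.zeros(5)
fused, alphas = gated_fusion(feats, W, b)
```

In a trained system, `gate_weights` and `gate_bias` would be learned jointly with the downstream classifier, letting the gate down-weight uninformative modalities per sample.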