Autoencoder
Electroencephalogram (EEG)
Transformer
Computer science
Artificial intelligence
Pattern recognition (psychology)
Speech recognition
Training set
Machine learning
Psychology
Artificial neural network
Engineering
Neuroscience
Electrical engineering
Voltage
Identifier
DOI: 10.1016/j.bspc.2024.106131
Abstract
Convolutional neural networks (CNNs) may not be ideal for extracting global temporal features from non-stationary electroencephalogram (EEG) signals. Masking-based methods remain little studied for EEG classification, and there is a shortage of commonly accepted models for verifying inter-individual results in motor imagery classification tasks. This article proposes MAE-EEG-Transformer, a Transformer with a masking mechanism. It is pre-trained by randomly masking signals, forcing the model to learn semantic features; the pre-trained encoder module is then transferred to the classification task and fine-tuned to predict the category of EEG signals. Features learned with and without pre-training are compared using t-SNE visualization to demonstrate the inter-subject efficacy of pre-training. MAE-EEG-Transformer is extensively evaluated on three prevalent EEG-based motor imagery datasets, demonstrating performance comparable to state-of-the-art models while requiring only approximately 20% of the computational cost (results in Tables 1-4).
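To make the pre-training scheme concrete, the sketch below illustrates masked-autoencoder pre-training on patched EEG signals in PyTorch. It is a minimal illustration under stated assumptions, not the authors' published architecture: the channel and patch sizes, the patching scheme, and the choice to zero out masked tokens (rather than drop them from the encoder input) are all assumptions made for brevity.

```python
import torch
import torch.nn as nn

class MAEEEGSketch(nn.Module):
    """Masked-autoencoder pre-training on patched EEG signals.

    A minimal sketch: all sizes (22 channels, 50-sample patches,
    64-dim tokens) and the zeroing-based masking are illustrative
    assumptions, not the paper's published configuration.
    """
    def __init__(self, n_channels=22, patch_len=50, d_model=64,
                 n_heads=4, n_layers=2, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Each token embeds one temporal patch across all channels.
        self.embed = nn.Linear(n_channels * patch_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # A lightweight decoder reconstructs the raw patch values.
        self.decoder = nn.Linear(d_model, n_channels * patch_len)

    def forward(self, patches):
        # patches: (batch, n_patches, n_channels * patch_len)
        tokens = self.embed(patches)
        # Randomly mask a fraction of tokens so the encoder must
        # infer the hidden content from global temporal context.
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < self.mask_ratio
        tokens = tokens.masked_fill(mask.unsqueeze(-1), 0.0)
        latent = self.encoder(tokens)
        recon = self.decoder(latent)
        # Reconstruction loss is computed on masked positions only.
        loss = ((recon - patches) ** 2)[mask].mean()
        return loss, latent

# One pre-training step on a dummy batch: 8 trials, 20 patches each.
model = MAEEEGSketch()
x = torch.randn(8, 20, 22 * 50)
loss, _ = model(x)
loss.backward()
```

For the downstream stage described in the abstract, the decoder would be discarded and a classification head (for example, mean-pooling over `latent` followed by a linear layer) attached to the pre-trained encoder before fine-tuning.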