Electroencephalography (EEG)
Motor imagery
Artificial intelligence
Convolution (computer science)
Computer science
Psychology
Speech recognition
Pattern recognition (psychology)
Neuroscience
Artificial neural network
Brain-computer interface
Authors
Xingbin Shi,Baojiang Li,Wenlong Wang,Yuxin Qin,Haiyan Wang,Xichao Wang
Source
Journal: Neuroscience
[Elsevier]
Date: 2024-08-03
Volume/Issue: 556: 42-51
Cited by: 1
Identifier
DOI:10.1016/j.neuroscience.2024.07.051
Abstract
A brain-computer interface (BCI) is a technology that directly connects signals between the human brain and a computer or other external device. Motor imagery electroencephalographic (MI-EEG) signals are considered a promising paradigm for BCI systems, with a wide range of potential applications in medical rehabilitation, human-computer interaction, and virtual reality. Accurate decoding of MI-EEG signals remains a significant challenge because of the limited quality of collected EEG data and variability across subjects, so developing an efficient MI-EEG decoding network is an important research problem. This paper proposes a joint-loss training model, EEG-VTTCNet, which combines a vision transformer (ViT) and a temporal convolutional network (TCN) to classify MI-EEG signals. To exploit the strengths of multiple modules together, EEG-VTTCNet adopts a shared-convolution strategy and a dual-branch strategy: the two branches learn complementary representations and jointly train the shared convolutional module, improving its performance. In experiments on the BCI Competition IV-2a and IV-2b datasets, the proposed network outperformed current state-of-the-art techniques, reaching accuracies of 84.58% and 90.94%, respectively, in the subject-dependent setting. In addition, t-SNE visualizations of the features extracted by the proposed network further demonstrate the effectiveness of the feature-extraction framework. Extensive ablation and hyperparameter-tuning experiments were also conducted to build a robust network architecture that generalizes well.
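The abstract does not specify the exact EEG-VTTCNet layer configuration, so the sketch below only illustrates the general idea it describes: a shared convolutional front end feeding a ViT-style branch and a TCN-style branch, trained with a joint loss so that both branches supervise the shared module. All layer sizes, branch internals, the dummy input shape, and the equal weighting of the two branch losses are assumptions for illustration, not the authors' published design.

```python
# Minimal, hypothetical sketch of a shared-convolution, dual-branch, joint-loss
# classifier in PyTorch. Hyperparameters and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class SharedConv(nn.Module):
    """Shared temporal + spatial convolution front end for raw MI-EEG epochs."""
    def __init__(self, n_channels: int = 22, n_filters: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, n_filters, kernel_size=(1, 25), padding=(0, 12)),  # temporal conv
            nn.Conv2d(n_filters, n_filters, kernel_size=(n_channels, 1)),   # spatial conv
            nn.BatchNorm2d(n_filters),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
        )

    def forward(self, x):                    # x: (batch, 1, channels, time)
        return self.net(x).squeeze(2)        # -> (batch, filters, time')


class TransformerBranch(nn.Module):
    """Attention branch: treats each pooled time step as a token (ViT-style encoder)."""
    def __init__(self, n_filters: int = 16, n_classes: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=n_filters, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(n_filters, n_classes)

    def forward(self, f):                    # f: (batch, filters, time')
        tokens = f.transpose(1, 2)           # -> (batch, time', filters)
        return self.head(self.encoder(tokens).mean(dim=1))


class TCNBranch(nn.Module):
    """Temporal branch: stacked dilated 1-D convolutions over the shared features."""
    def __init__(self, n_filters: int = 16, n_classes: int = 4):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(n_filters, n_filters, kernel_size=3, dilation=1, padding=1),
            nn.ELU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=3, dilation=2, padding=2),
            nn.ELU(),
        )
        self.head = nn.Linear(n_filters, n_classes)

    def forward(self, f):                    # f: (batch, filters, time')
        return self.head(self.tcn(f).mean(dim=-1))


class DualBranchModel(nn.Module):
    def __init__(self, n_channels: int = 22, n_classes: int = 4):
        super().__init__()
        self.shared = SharedConv(n_channels)
        self.vit_branch = TransformerBranch(n_classes=n_classes)
        self.tcn_branch = TCNBranch(n_classes=n_classes)

    def forward(self, x):
        f = self.shared(x)                   # both branches consume the same features
        return self.vit_branch(f), self.tcn_branch(f)


# Joint-loss training step: both branch losses back-propagate into the shared
# convolution, so the two branches supervise it in a complementary way.
model = DualBranchModel()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 1, 22, 1000)              # dummy batch: 8 epochs, 22 channels, 1000 samples
y = torch.randint(0, 4, (8,))                # dummy motor-imagery class labels

logits_vit, logits_tcn = model(x)
loss = criterion(logits_vit, y) + criterion(logits_tcn, y)   # equal weighting is an assumption
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At inference time, the two branch outputs could be averaged or the stronger branch used alone; the abstract does not state which combination rule EEG-VTTCNet uses, so that choice is left open here.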