Motor imagery
Computer science
Electroencephalography (EEG)
Graph
Convolutional neural network
Artificial intelligence
Robustness (evolution)
Pattern recognition (psychology)
Feature extraction
Deep learning
Machine learning
Psychology
Brain–computer interface
Theoretical computer science
Biochemistry
Chemistry
Psychiatry
Gene
Authors
Weifeng Ma,Chuanlai Wang,Xiang Sun,Xuefen Lin,Yuchen Wang
Identifier
DOI:10.1016/j.bspc.2023.104684
Abstract
The emergence of deep learning methods has driven the widespread use of brain–machine interface motor imagery classification in machine control and medical rehabilitation, achieving classification accuracies superior to those of traditional machine learning methods. However, models trained with current mainstream deep learning methods show accuracy variations of more than 20% when classifying data from different subjects in the same dataset. This large variation indicates weak model robustness and difficulty in extracting features for some subjects. Because motor imagery classification targets individual users, results that vary too much from one user to another hinder the diffusion of the technique. In our research, we found that the accuracy differences between subjects are caused by differences in the spatial characteristics and training difficulty of their data. Therefore, exploring these differences and weakening them can reduce the accuracy gap between subjects and ensure that the model achieves good classification accuracy for every subject. We call this operation of reducing the accuracy gap individual differences weakening. To implement it, we propose a Double-branch Graph Convolutional Attention Neural Network (DGCAN), which uses a graph neural network to select channels that are less disturbed by spatial location factors and spatial–temporal convolutions to extract features from the selected channels; weakening the influence of spatial features in this way contributes to individual differences weakening. We also design a loss function, EegLoss, which focuses training on hard samples and effectively reduces the proportion of each subject's data to which the model is insensitive. We test model performance on the BCI Competition IV datasets 2a and 2b, achieving accuracies of 84% and 86%, respectively.
We also compare the accuracy gap between subjects, showing that our model is effective in reducing this gap and is more robust than previous models.
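The abstract states that EegLoss focuses training on hard samples but does not give its formula. A common way to achieve this effect is a focal-loss-style modulating factor that down-weights well-classified samples; the sketch below is an illustrative assumption in that spirit, not the paper's published definition (the function name, `gamma` parameter, and NumPy formulation are all hypothetical).

```python
import numpy as np

def hard_sample_loss(probs, labels, gamma=2.0):
    """Hypothetical hard-sample-focused loss in the spirit of EegLoss.

    Applies a focal-loss-style factor (1 - p_t)**gamma to the
    cross-entropy of each sample, so easy samples (p_t near 1)
    contribute little and gradient mass concentrates on hard ones.
    With gamma=0 this reduces to plain cross-entropy.
    """
    probs = np.asarray(probs, dtype=float)         # (N, C) softmax outputs
    labels = np.asarray(labels)                    # (N,) true class indices
    p_t = probs[np.arange(len(labels)), labels]    # probability of true class
    per_sample = -((1.0 - p_t) ** gamma) * np.log(p_t)
    return per_sample.mean()
```

For example, with predictions `[[0.9, 0.1], [0.6, 0.4]]` and labels `[0, 0]`, the confidently classified first sample is strongly down-weighted relative to the harder second one, whereas plain cross-entropy (`gamma=0`) weights both fully.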