Keywords
Artificial intelligence, Computer science, Artificial neural network, Transformer, Pattern recognition (psychology), Engineering, Voltage, Electrical engineering
Authors
Jianbo Chen,Yangsong Zhang,Yudong Pan,Peng Xu,Cuntai Guan
Identifier
DOI:10.1016/j.neunet.2023.04.045
Abstract
The steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain-computer interface (BCI) systems. However, conventional spatial filtering methods for SSVEP classification depend heavily on subject-specific calibration data, so methods that reduce the demand for calibration data are urgently needed. In recent years, developing methods that work in the inter-subject scenario has become a promising direction. The Transformer, a popular deep learning model, has been applied to EEG classification tasks owing to its excellent performance. Therefore, in this study we propose a Transformer-based deep learning model for SSVEP classification in the inter-subject scenario, termed SSVEPformer, which is the first application of the Transformer to SSVEP classification. Inspired by previous studies, we adopt the complex spectrum features of the SSVEP data as the model input, enabling the model to simultaneously exploit spectral and spatial information for classification. Furthermore, to fully utilize harmonic information, an extended SSVEPformer based on the filter bank technique (FB-SSVEPformer) is proposed to further improve classification performance. Experiments were conducted on two open datasets (Dataset 1: 10 subjects, 12 targets; Dataset 2: 35 subjects, 40 targets). The results show that the proposed models achieve better classification accuracy and information transfer rate than the baseline methods. These findings validate the feasibility of Transformer-based deep learning models for SSVEP classification, and the proposed models could help alleviate the calibration procedure in practical SSVEP-based BCI systems.
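To make the feature-extraction idea in the abstract concrete, the sketch below shows one way to compute complex spectrum features (the real and imaginary FFT coefficients of each channel, concatenated) and a filter-bank decomposition of raw EEG. This is a minimal illustration using NumPy/SciPy; the sampling rate, frequency band edges, sub-band boundaries, filter order, and function names are assumptions for the example, not the exact configuration used by SSVEPformer or FB-SSVEPformer.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def complex_spectrum(eeg, fs=250, f_lo=8.0, f_hi=64.0, nfft=None):
    """Concatenate the real and imaginary FFT coefficients of each channel,
    restricted to a frequency band of interest.
    eeg: array of shape (n_channels, n_samples)."""
    n_samples = eeg.shape[1]
    nfft = nfft or n_samples
    spec = np.fft.rfft(eeg, n=nfft, axis=1)            # (n_channels, nfft // 2 + 1)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spec = spec[:, band]
    # Spectral (frequency bins) and spatial (channels) information sit side by
    # side in a single real-valued feature matrix.
    return np.concatenate([spec.real, spec.imag], axis=1)

def filter_bank(eeg, fs=250, bands=((8, 64), (16, 64), (24, 64)), order=4):
    """Band-pass the raw EEG into sub-bands so harmonic information can be
    exploited separately (the filter-bank idea behind FB-SSVEPformer).
    Returns an array of shape (n_bands, n_channels, n_samples)."""
    filtered = []
    for lo, hi in bands:
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered.append(filtfilt(b, a, eeg, axis=1))
    return np.stack(filtered)

# Example: one 1-second, 8-channel trial at 250 Hz (random placeholder data)
trial = np.random.randn(8, 250)
features = np.stack([complex_spectrum(sub) for sub in filter_bank(trial)])
print(features.shape)   # (n_bands, n_channels, 2 * n_selected_bins)
```

A feature tensor of this shape, one complex spectrum per sub-band, is the kind of input a filter-bank classification network would then consume.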