Yu Zhang, Penghai Li, Longlong Cheng, Mingji Li, Hongji Li
Source
Journal: IEEE Transactions on Consumer Electronics [Institute of Electrical and Electronics Engineers]. Date: 2023-11-07. Volume/Issue: 70 (1): 2423-2434. Cited by: 3
Identifier
DOI: 10.1109/tce.2023.3330423
Abstract
Motor imagery (MI) electroencephalography (EEG) has been used in consumer products supported by brain-computer interfaces (BCI), with existing electronics covering a wide range of domains from artificial intelligence (AI) to the Internet of Things (IoT). However, limitations in decoding MI-EEG signals have restricted the further development of the related Consumer Electronics (CE) industry. To address this problem, this paper proposes an attention-based multiscale spatial-temporal convolutional network (AMSTCNet). First, a multi-branch structure is designed to extract high-dimensional spatial-temporal representations at different scales. Second, Squeeze-Excite-Compress (SEC) blocks are proposed to highlight feature responses within a single scale, and these features are fused by weighting to reduce information redundancy. Finally, an attention-based temporal convolutional network is used to obtain deep temporal information from the signal and dynamically fuse features across scales. In addition, the AMSTCNet model is an end-to-end decoder that takes raw EEG signals as input. We evaluated the decoding performance of the AMSTCNet model on the BCI IV 2a dataset and the High Gamma dataset, achieving recognition accuracies of 87.55% and 96.35%, respectively. Compared with existing methods, our method achieves satisfactory decoding performance and can greatly facilitate the application of BCI technology in CE.
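The SEC blocks described above build on squeeze-and-excitation channel attention. As a rough illustration of that underlying mechanism, the NumPy sketch below shows standard squeeze-excite reweighting of a (channels × time) feature map; the paper's "Compress" stage, and all shapes, weights, and the reduction ratio here, are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(x, w1, w2):
    """Channel reweighting in the style of a squeeze-and-excitation block.

    x  : (channels, time) feature map from one scale branch
    w1 : (channels // r, channels) squeeze projection (r = reduction ratio)
    w2 : (channels, channels // r) excite projection
    """
    # Squeeze: global average pooling over the temporal axis
    z = x.mean(axis=1)                        # (channels,)
    # Excite: bottleneck MLP producing per-channel attention weights in (0, 1)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # (channels,)
    # Rescale: emphasize informative channels, suppress redundant ones
    return x * s[:, None]

# Hypothetical dimensions: 16 feature channels, 128 time samples, reduction 4
rng = np.random.default_rng(0)
C, T, r = 16, 128, 4
x = rng.standard_normal((C, T))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = squeeze_excite(x, w1, w2)
print(y.shape)  # (16, 128)
```

The output keeps the input's shape; only the per-channel scale changes, which is what lets the network weight one scale's feature responses before multi-scale fusion.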