Authors
Nianming Ban, Shanghong Xie, Chao Qu, Xuening Chen, Jiahui Pan
Abstract
To address the issues of low control accuracy, an insufficient number of commands, and limited machine functionality in brain-machine interfaces (BMIs), we propose a multifunctional robot control system based on a multimodal BMI that fuses three signal modalities: steady-state visual evoked potentials (SSVEP), electrooculography (EOG), and gyroscope data. The system enables the robot to perform ten actions: moving forward, turning left, turning right, stopping, gripping, lifting and lowering the left arm, rotating the left elbow clockwise and counterclockwise, and searching for and grabbing a ball. Additionally, a new SSVEP paradigm with a two-level menu is designed, allowing subjects to switch between control menus by double blinking and providing sufficient commands with fewer stimulation blocks. For SSVEP classification, we propose an attention-based CNN-BiLSTM network (ACB-Net), which automatically weights EEG channels according to their importance, yielding better feature extraction. To demonstrate the superiority of our model, we conducted classification experiments on a public dataset and a self-collected dataset against six other SSVEP classification methods, and our model achieved the highest accuracy. In the online experiment, all 16 subjects completed complex tasks, with an average accuracy of 93.78% and an average ITR of 93.75 bits/min. Furthermore, we enhanced the robot's functionality by adding visual capabilities, making control more intelligent. Overall, the proposed system demonstrates precise control of the Nao robot and holds significant potential for applications in both medical and robot-control domains.
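The information transfer rate (ITR) reported above is the standard figure of merit for BMI command throughput. As a minimal sketch, the widely used Wolpaw formula computes it from the number of selectable commands, the classification accuracy, and the average time per selection; the selection time used in the usage example below is an illustrative assumption, not a value stated in the abstract.

```python
import math

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """ITR (Wolpaw formula) in bits/min.

    n_targets: number of selectable commands (ten in the system above)
    accuracy: classification accuracy P, 0 < P <= 1
    seconds_per_selection: average time to issue one command (assumed here)
    """
    n, p = n_targets, accuracy
    bits = math.log2(n)  # bits per selection at perfect accuracy
    if p < 1.0:
        # Penalty terms for misclassification, spread over the other targets
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return max(bits, 0.0) * (60.0 / seconds_per_selection)

# Illustration: 10 commands at perfect accuracy, 2 s per selection (assumed)
# gives 30 selections/min at log2(10) bits each, about 99.7 bits/min.
print(round(itr_bits_per_min(10, 1.0, 2.0), 1))
```

Note that at chance-level accuracy the formula yields zero useful bits, which is why both accuracy and selection speed must be high to reach ITRs near the 93.75 bits/min reported here.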