Keywords
Brain–computer interface, Computer science, Robotic arm, Asynchronous communication, Interface (matter), Workspace, Grasping, Artificial intelligence, Computer vision, Robot, Human–computer interaction, Simulation, Electroencephalography, Maximum bubble pressure method, Programming language, Parallel computing, Bubble, Psychology, Psychiatry, Computer network
Authors
Yajun Zhou, Tianyou Yu, Wei Gao, Weichen Huang, Zilin Lu, Qiyun Huang, Yuanqing Li
Identifier
DOI: 10.1109/TNSRE.2023.3299350
Abstract
A brain-computer interface (BCI) can be used to translate neuronal activity into commands to control external devices. However, using a noninvasive BCI to control a robotic arm for movements in three-dimensional (3D) environments and to accomplish complicated daily tasks, such as grasping and drinking, remains a challenge. In this study, a shared robotic arm control system based on a hybrid asynchronous BCI and computer vision is presented. The BCI model, which combines steady-state visual evoked potentials (SSVEPs) and blink-related electrooculography (EOG) signals, allows users to freely choose from fifteen commands in an asynchronous mode corresponding to robot actions in a 3D workspace and to reach targets across a wide movement range, while computer vision identifies objects and assists the robotic arm in completing more precise tasks, such as grasping a target automatically. Ten subjects participated in the experiments and achieved an average accuracy of more than 92% and high trajectory efficiency for robot movement. All subjects were able to perform the reach-grasp-drink tasks successfully using the proposed shared control method, with fewer erroneous commands and shorter completion times than with direct BCI control. Our results demonstrate the feasibility and efficiency of generating practical multidimensional control of an intuitive robotic arm by merging a hybrid asynchronous BCI with computer vision-based recognition.
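The abstract describes mapping SSVEP responses to discrete robot commands: each on-screen stimulus flickers at a distinct frequency, and the dominant frequency in the user's EEG indicates the attended target. The sketch below illustrates this idea with a simple spectral-peak classifier on a synthetic single-channel epoch. The sampling rate, stimulus frequency set, and command names are illustrative assumptions, not the paper's actual parameters (the paper uses fifteen commands and a more sophisticated hybrid SSVEP/EOG decoder).

```python
import numpy as np

# Assumed, illustrative parameters -- not taken from the paper.
FS = 250                                       # sampling rate in Hz
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]           # candidate SSVEP flicker frequencies
COMMANDS = ["move_left", "move_right", "move_up", "grasp"]  # hypothetical mapping

def classify_ssvep(epoch: np.ndarray) -> str:
    """Return the command whose stimulus frequency has the largest spectral power."""
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / FS)
    # Power at the FFT bin nearest each candidate frequency.
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in STIM_FREQS]
    return COMMANDS[int(np.argmax(powers))]

# Synthetic 2-second epoch dominated by a 12 Hz flicker response plus noise.
t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 12.0 * t) + 0.3 * rng.standard_normal(t.size)
print(classify_ssvep(epoch))  # prints "move_up"
```

In practice, SSVEP decoders typically use multi-channel methods such as canonical correlation analysis rather than a single-channel FFT peak, and the paper's shared-control scheme additionally hands off fine grasping to the computer-vision module once the arm is near the target.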