Transformer
Gesture
Robot
Computer science
Human-computer interaction
Artificial intelligence
Gesture recognition
Hidden Markov model
Computer vision
Engineering
Speech recognition
Voltage
Electrical engineering
Authors
Yanhong Liu,Xingyu Li,Lei Yang,Hongnian Yu
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 73: 1-15
Citations: 6
Identifier
DOI:10.1109/tim.2024.3373045
Abstract
As one of the most direct and pivotal modes of human-computer interaction (HCI), the application of surface electromyography (sEMG) signals to gesture prediction has emerged as a prominent research area. To improve the performance of gesture prediction systems based on multi-channel sEMG signals, a novel gesture prediction framework is proposed: (i) the raw multi-channel sEMG signals are converted into two-dimensional time-frequency maps via the continuous wavelet transform (CWT); (ii) for these time-frequency map inputs, a Transformer-based classification network, named DIFT-Net, is proposed that effectively learns both local and global context information, with the goal of enabling sEMG-based gesture prediction for robot interaction. DIFT-Net employs a dual-branch interactive fusion structure based on the Swin Transformer, enabling effective acquisition of global contextual information and local details. In addition, an attention guidance module (AGM) and an attentional interaction module (AIM) are proposed to guide feature extraction and fusion in DIFT-Net. The AGM takes intermediate features from the same stage of both branches as input and, through convolutional attention, guides the network to extract more localized and detailed features. The AIM integrates the output features of both branches to enhance the aggregation of global context information across scales. To validate the efficacy of DIFT-Net, a multi-channel sEMG bracelet is used to collect and construct an sEMG signal dataset. Experimental results show that DIFT-Net attains 98.36% accuracy on the self-built dataset and 82.64% accuracy on the public Ninapro DB1 dataset.