Keywords
Computer science
Artificial intelligence
Electromyography
Sensor fusion
Fusion
Pattern recognition (psychology)
Frame (networking)
Stability (learning theory)
Motion (physics)
Feature (linguistics)
Machine learning
Physical medicine and rehabilitation
Medicine
Telecommunications
Linguistics
Philosophy
Authors
Bin Zhou, Naishi Feng, Hong Wang, Yanzheng Lu, Chunfeng Wei, Daqi Jiang, Ziyang Li
Source
Journal: Journal of Neural Engineering [IOP Publishing]
Date: 2022-08-01
Volume/Issue: 19(4): 046051
Citations: 2
Identifier
DOI: 10.1088/1741-2552/ac89b4
Abstract
Objective. Recent technological advances show the feasibility of fusing surface electromyography (sEMG) signals and movement data to predict lower limb ambulation intentions. However, since the invasive fusion of different signals is a major impediment to improving predictive performance, searching for a non-invasive (NI) fusion mechanism for lower limb ambulation pattern recognition based on different modal features is crucial.
Approach. We propose an end-to-end sequence prediction model with non-invasive dual attention temporal convolutional networks (NIDA-TCNs) at its core to address the essential deficiencies of traditional decision models based on heterogeneous signal fusion. Notably, the NIDA-TCN performs a weighted fusion of sEMG and inertial measurement unit (IMU) features, extracting time-dependent hidden information in the temporal and channel dimensions with TCNs and self-attention mechanisms. The new model better discriminates between four lower limb activities of daily living: walking, jumping, going downstairs, and going upstairs.
Main results. The results of this study show that the NIDA-TCN models produce predictions that significantly outperform both frame-wise and TCN models in terms of accuracy, sensitivity, precision, F1 score, and stability. In particular, the NIDA-TCN with sequence decision fusion (NIDA-TCN-SDF) models achieve maximum accuracy and stability increments of 3.37% and 4.95%, respectively, relative to the frame-wise model, without manual feature encoding or complex model parameters.
Significance. These results demonstrate the validity and feasibility of the NIDA-TCN-SDF models for predicting daily lower limb ambulation activities, paving the way for the development of fused heterogeneous signal decoding with better prediction performance.
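To make the architecture described in the abstract more concrete, the following is a minimal PyTorch sketch of one way a dual-attention temporal convolutional network could fuse sEMG and IMU streams with a learnable weighted fusion and per-time-step classification. It is not the authors' NIDA-TCN implementation: the class names (FusionTCN, DualAttention, CausalTCNBlock), channel counts, window length, number of layers, and the sigmoid-gated fusion weight are illustrative assumptions based only on the abstract.

```python
# Minimal sketch of a dual-attention TCN fusing sEMG and IMU streams.
# NOT the authors' NIDA-TCN; all layer sizes and the fusion scheme are
# illustrative assumptions inferred from the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalTCNBlock(nn.Module):
    """One dilated causal 1-D convolution block with a residual connection."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation           # left-pad to keep causality
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                                 # x: (batch, channels, time)
        out = self.conv(F.pad(x, (self.pad, 0)))
        return self.relu(out + x)                         # residual connection

class DualAttention(nn.Module):
    """Self-attention over the temporal axis plus a channel re-weighting gate."""
    def __init__(self, channels):
        super().__init__()
        self.temporal = nn.MultiheadAttention(embed_dim=channels, num_heads=2,
                                              batch_first=True)
        self.channel_gate = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x):                                 # x: (batch, channels, time)
        t = x.transpose(1, 2)                             # (batch, time, channels)
        t, _ = self.temporal(t, t, t)                     # temporal self-attention
        gate = self.channel_gate(t.mean(dim=1))           # (batch, channels)
        return (t * gate.unsqueeze(1)).transpose(1, 2)    # channel re-weighting

class FusionTCN(nn.Module):
    """Weighted fusion of sEMG and IMU branches, then per-time-step classification."""
    def __init__(self, emg_ch=8, imu_ch=6, hidden=32, n_classes=4):
        super().__init__()
        self.emg_proj = nn.Conv1d(emg_ch, hidden, 1)      # project raw channels
        self.imu_proj = nn.Conv1d(imu_ch, hidden, 1)
        self.emg_branch = nn.Sequential(CausalTCNBlock(hidden, dilation=1),
                                        CausalTCNBlock(hidden, dilation=2),
                                        DualAttention(hidden))
        self.imu_branch = nn.Sequential(CausalTCNBlock(hidden, dilation=1),
                                        CausalTCNBlock(hidden, dilation=2),
                                        DualAttention(hidden))
        self.fusion_weight = nn.Parameter(torch.tensor(0.5))  # learnable mixing weight
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, emg, imu):                          # each: (batch, channels, time)
        e = self.emg_branch(self.emg_proj(emg))
        i = self.imu_branch(self.imu_proj(imu))
        w = torch.sigmoid(self.fusion_weight)
        fused = w * e + (1 - w) * i                       # weighted feature-level fusion
        return self.classifier(fused.transpose(1, 2))     # (batch, time, n_classes)

if __name__ == "__main__":
    model = FusionTCN()
    emg = torch.randn(2, 8, 200)    # e.g. 8 sEMG channels, 200-sample window
    imu = torch.randn(2, 6, 200)    # e.g. 6 IMU channels (3-axis accel + gyro)
    logits = model(emg, imu)        # one class prediction per time step
    print(logits.shape)             # torch.Size([2, 200, 4])
```

A sequence decision fusion step, which is one plausible reading of the NIDA-TCN-SDF variant, could then aggregate the per-time-step logits over a window (for example by averaging or majority vote) into a single activity label.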