Sign language, Movement (music), Computer science, Sign (mathematics), Coding (memory), Speech recognition, Fuzzy logic, Artificial intelligence, Computer vision, Linguistics, Mathematics, Acoustics, Mathematical analysis, Philosophy, Physics
Authors
Caise Wei,Shiqiang Liu,Jinfeng Yuan,Rong Zhu
Source
Journal: InfoMat [Wiley]
Date: 2024-11-18
Abstract
Wearable sign language recognition helps hearing- and speech-impaired people communicate with non-signers. However, current technologies still fall short of practical use because of limited sensing and decoding capabilities. Here, a continuous sign language recognition system is proposed with multimodal hand/finger movement sensing and fuzzy encoding. It is trained with a small set of word-level samples from one user yet applicable to sentence-level recognition for new, untrained users, achieving data-efficient, universal recognition. A stretchable fabric strain sensor is developed by printing conductive poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) ink on a pre-stretched fabric wrapping a rubber band, giving the strain sensor superior performance: a wide sensing range, high sensitivity, good linearity, fast dynamic response, low hysteresis, and good long-term reliability. A flexible e-skin with a homemade micro-flow sensor array is further developed to accurately capture three-dimensional hand movements. Benefiting from the fabric strain sensors for finger movement sensing, the micro-flow sensor array for 3D hand movement sensing, and human-inspired fuzzy encoding for semantic comprehension, sign language is captured accurately without interference from individual differences in how signs are performed. Experimental results show that the semantic comprehension accuracy reaches 99.7% and 95% in recognizing 100 isolated words and 50 sentences, respectively, for a trained user, and reaches 80% in recognizing 50 sentences for new, untrained users.
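The abstract does not detail how the human-inspired fuzzy encoding works, so the following is only a minimal Python sketch of the general idea: quantizing a normalized finger-bend signal into a few coarse fuzzy levels so that slightly different executions of the same sign by different users map to the same symbol sequence. The triangular membership functions, the three level names, and the single-channel input are illustrative assumptions, not the paper's actual scheme.

```python
# Illustrative sketch only: membership functions, level names, and the
# single normalized strain channel are assumptions, not the paper's method.
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

# Hypothetical fuzzy levels for a normalized finger-bend reading in [0, 1].
LEVELS = {
    "straight":  (0.0, 0.0, 0.4),
    "half_bent": (0.2, 0.5, 0.8),
    "bent":      (0.6, 1.0, 1.0),
}

def fuzzy_encode(sample):
    """Assign one reading to the fuzzy level with the highest membership."""
    memberships = {name: triangular(sample, *abc) for name, abc in LEVELS.items()}
    return max(memberships, key=memberships.get)

def encode_sequence(readings):
    """Encode a strain time series as a symbol sequence, collapsing repeats
    so small amplitude or speed differences yield the same coarse code."""
    symbols = [fuzzy_encode(x) for x in readings]
    compressed = [symbols[0]]
    for s in symbols[1:]:
        if s != compressed[-1]:
            compressed.append(s)
    return compressed

if __name__ == "__main__":
    # Two users performing the "same" finger motion with different amplitudes
    # still produce the same symbolic code.
    user_a = np.linspace(0.05, 0.95, 20)
    user_b = np.linspace(0.10, 0.85, 20)
    print(encode_sequence(user_a))  # ['straight', 'half_bent', 'bent']
    print(encode_sequence(user_b))  # ['straight', 'half_bent', 'bent']
```

In a full pipeline along the lines described in the abstract, such symbol sequences from the finger strain sensors would be combined with codes derived from the 3D hand-movement (micro-flow) channel before word- and sentence-level decoding; that fusion step is not shown here.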