Gesture
Sign language
Wearable computer
Sign (mathematics)
Gesture recognition
Computer science
American Sign Language
Natural language processing
Translation (biology)
Speech recognition
Machine translation
Artificial intelligence
Linguistics
Embedded system
Mathematics
Philosophy
Mathematical analysis
Messenger RNA
Chemistry
Gene
Biochemistry
Authors
Zhihao Zhou, Kyle Chen, Xiaoshi Li, Songlin Zhang, Yufen Wu, Yihao Zhou, Keyu Meng, Chenchen Sun, Qiang He, Wenjing Fan, Endong Fan, Zhiwei Lin, Xulong Tan, Weili Deng, Jin Yang, Jun Chen
Identifier
DOI: 10.1038/s41928-020-0428-6
Abstract
Signed languages are not as pervasive a conversational medium as spoken languages due to the history of institutional suppression of the former and the linguistic hegemony of the latter. This has led to a communication barrier between signers and non-signers that could be mitigated by technology-mediated approaches. Here, we show that a wearable sign-to-speech translation system, assisted by machine learning, can accurately translate the hand gestures of American Sign Language into speech. The wearable sign-to-speech translation system is composed of yarn-based stretchable sensor arrays and a wireless printed circuit board, and offers a high sensitivity and fast response time, allowing real-time translation of signs into spoken words to be performed. By analysing 660 acquired sign language hand gesture recognition patterns, we demonstrate a recognition rate of up to 98.63% and a recognition time of less than 1 s. Wearable yarn-based stretchable sensor arrays, combined with machine learning, can be used to translate American Sign Language into speech in real time.
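The abstract describes a gesture-recognition pipeline in which signals from yarn-based stretchable sensor arrays are classified by a machine-learning model, with 660 acquired gesture patterns and a reported recognition rate of up to 98.63%. The sketch below is a minimal, illustrative reconstruction of such a pipeline, not the authors' implementation: the number of sensor channels, the window length, the random placeholder data, and the choice of a support-vector classifier are all assumptions made here, since the abstract only states that machine learning is used.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Hypothetical dataset: 660 gesture samples, each a flattened window of
# readings from a stretchable sensor array. Shapes and labels are assumed
# for illustration; the real sensor data are not reproduced here.
N_SAMPLES, N_SENSORS, N_TIMESTEPS, N_CLASSES = 660, 5, 100, 11
rng = np.random.default_rng(0)
X = rng.normal(size=(N_SAMPLES, N_SENSORS * N_TIMESTEPS))  # placeholder signals
y = rng.integers(0, N_CLASSES, size=N_SAMPLES)             # placeholder gesture labels

# Hold out a test split to estimate the recognition rate.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Standardize sensor features, then classify with an RBF-kernel SVM
# (an assumed model choice; the abstract does not name the classifier).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)

print(f"recognition rate on held-out gestures: {clf.score(X_test, y_test):.2%}")
```

In a deployed system, the predicted gesture label would be mapped to a word and passed to a text-to-speech stage to produce the spoken output; that step is omitted from the sketch.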