Gesture
Hidden Markov model
Sign language
Computer science
Gesture recognition
Speech recognition
Wearable computer
Population
Spoken language
Artificial intelligence
Embedded system
Medicine
Linguistics
Environmental health
Philosophy
Authors
Vijayalakshmi Parthasarathy, T. Nagarajan, Jayapriya Ramesh, Brathindara Suresh, Krithika Kandasamy, N. Nikhilesh, Narenraju Nagarajan, S. Johanan Joysingh, Aiswarya Vijayakumar, Mrinalini Kannan
Identifier
DOI: 10.1080/17483107.2021.2022787
Abstract
Mild to profound hearing impairment places limits on effective communication and day-to-day interaction. Sign language, being the primary mode of communication for people with hearing loss, lacks communicative efficacy when the other party is untrained in it. A wearable assistive device that converts sign language into speech is proposed to facilitate communication between the unimpaired population (untrained in sign language) and the hearing-impaired population. However, the wide use of geo-centric sign languages in India has resulted in a lack of standardised sign-language datasets. In the proposed work, a compact, low-resource, motion-sensor-based, wireless, single- and double-hand gesture recognition module is designed to address this issue.

The proposed module performs a two-step process: Hidden Markov Model (HMM) based gesture-to-text conversion followed by bilingual text-to-speech synthesis. Multi-threading-based parallel processing is implemented so that the two systems run simultaneously, reducing delay. In the proposed continuous-gesture recognition system, non-gesture hand motions are modelled using ergodic HMMs that are trained by concatenating all the states of the gesture models, allowing equiprobable transitions. The proposed system is modelled and tested for American Sign Language (ASL) and user-defined gestures.

The maximum performance of the proposed system in recognising single-handed and double-handed gestures, in terms of F1-score, is 98.17% and 84.85%, respectively. Further, the proposed system achieves a maximum F1-score of 98% and 83% in recognising isolated and continuous gestures, respectively. The gesture-to-speech conversion system is ported to a Raspberry Pi, making the proposed system wireless and highly mobile.

Implications for rehabilitation

The research work proposes a gesture-to-speech conversion system to enable communication for the deaf-mute population. The major implications of the proposed work are:

• A lightweight Raspberry Pi 3B+ module hosts the entire hardware and software and is sufficient to train and test the gesture-to-speech conversion system, thereby ensuring greater mobility.
• The proposed system can be customised to recognise user-defined gestures with just 5 examples of the new gesture.
• The proposed system can be expanded to control home appliances (IoT applications) by combining the output of the proposed gesture recognition system with appropriate control interfaces.
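As a rough illustration of the two-stage pipeline the abstract describes (HMM-based gesture-to-text conversion feeding a speech-synthesis stage, with the two stages running on parallel threads), the sketch below shows one way such a system could be wired together. It is not the authors' implementation: the use of hmmlearn for the per-gesture HMMs, the synthetic 6-D motion features, the gesture labels, and the print placeholder standing in for the bilingual TTS engine are all assumptions made for the example.

```python
# Minimal sketch of an HMM gesture-to-text stage plus a concurrent speech stage.
# Assumptions (not from the paper): hmmlearn models, random toy features,
# and a print() placeholder where a bilingual TTS engine would be invoked.
import queue
import threading
import numpy as np
from hmmlearn import hmm


def train_gesture_models(training_data, n_states=4):
    """Fit one Gaussian HMM per gesture label.

    training_data: {label: list of (T_i, n_features) observation sequences}
    """
    models = {}
    for label, sequences in training_data.items():
        X = np.vstack(sequences)                 # stack sequences for hmmlearn
        lengths = [len(s) for s in sequences]    # per-sequence lengths
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=25)
        model.fit(X, lengths)
        models[label] = model
    return models


def recognise(models, sequence):
    """Maximum-likelihood classification: return the label of the gesture HMM
    that assigns the highest log-likelihood to the observation sequence."""
    scores = {label: m.score(sequence) for label, m in models.items()}
    return max(scores, key=scores.get)


def speech_worker(text_queue):
    """Consumer thread: pops recognised text and 'speaks' it.
    A real system would call a bilingual TTS synthesiser here."""
    while True:
        text = text_queue.get()
        if text is None:          # sentinel: stop the thread
            break
        print(f"[TTS] speaking: {text}")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy dataset: 5 example sequences per gesture (mirroring the abstract's
    # claim that 5 examples suffice for a user-defined gesture), 6-D features.
    toy_data = {
        "HELLO":  [rng.normal(0.0, 1.0, (30, 6)) for _ in range(5)],
        "THANKS": [rng.normal(3.0, 1.0, (30, 6)) for _ in range(5)],
    }
    models = train_gesture_models(toy_data)

    # Recognition runs on the main thread; speech runs on a separate thread,
    # connected by a queue so the two stages overlap in time.
    text_queue = queue.Queue()
    tts_thread = threading.Thread(target=speech_worker, args=(text_queue,))
    tts_thread.start()

    test_seq = rng.normal(3.0, 1.0, (30, 6))
    text_queue.put(recognise(models, test_seq))

    text_queue.put(None)
    tts_thread.join()
```

The queue-and-thread layout is only meant to show why the paper's multi-threaded design reduces end-to-end delay: text for an earlier gesture can be synthesised while the next gesture is still being recognised. The ergodic non-gesture HMM described in the abstract is omitted here.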