Keywords
Gesture
Gesture recognition
Computer science
Wearable computer
Wearable technology
Motion (physics)
Motion capture
Artificial intelligence
Human-computer interaction
Computer vision
Embedded system
Authors
Shu Wang, Aiguo Wang, Mengyuan Ran, Li Liu, Yuxin Peng, Ming Liu, Guoxin Su, Adi Alhudhaif, Fayadh Alenezi, Norah Alnaim
Identifier
DOI: 10.1016/j.ins.2022.05.085
Abstract
The primary goal of hand gesture recognition with wearables is to facilitate gestural user interfaces in mobile and ubiquitous environments. A key challenge in wearable-based hand gesture recognition is that a hand gesture can be performed in several ways, each with its own configuration of motions and their spatio-temporal dependencies. However, existing methods generally focus on the characteristics of a single point on the hand and ignore the diversity of motion information over the hand skeleton; as a result, they face two key challenges in characterizing hand gestures over multiple wearable sensors: motion representation and motion modeling. This leads us to define a spatio-temporal framework, named STGauntlet, that explicitly characterizes the hand motion context of spatio-temporal relations among multiple bones and detects hand gestures in real time. In particular, our framework incorporates a Lie group-based representation to capture the inherent structural varieties of hand motions with spatio-temporal dependencies among multiple bones. To evaluate our framework, we developed a hand-worn prototype device with multiple motion sensors. An in-lab study on a dataset collected from nine subjects shows that our approach significantly outperforms state-of-the-art methods, achieving average accuracies of 98.2% and 95.6% for subject-dependent and subject-independent gesture recognition, respectively. We also present in-the-wild applications that highlight the interaction capability of our framework.
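The abstract's Lie group-based representation is not detailed here, but the standard construction in skeleton-based recognition encodes the relative rotation between adjacent bones as an element of SO(3) and maps it to the Lie algebra so(3) to obtain a vector feature. The sketch below illustrates that idea under those assumptions; the function names (`rotation_between`, `log_map`) are hypothetical and not from the paper.

```python
import numpy as np

def rotation_between(u, v):
    """Rotation matrix in SO(3) aligning unit vector u with unit vector v
    (Rodrigues' formula); u, v would be direction vectors of adjacent bones."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    axis = np.cross(u, v)
    s = np.linalg.norm(axis)          # sin(theta)
    c = np.dot(u, v)                  # cos(theta)
    if s < 1e-12:                     # parallel or anti-parallel bones
        return np.eye(3) if c > 0 else -np.eye(3) + 2 * np.outer(u, u)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]]) / s   # normalized skew matrix
    return np.eye(3) + s * K + (1 - c) * (K @ K)

def log_map(R):
    """Logarithm map SO(3) -> so(3): rotation matrix to axis-angle vector,
    giving a 3-D feature suitable for temporal modeling."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    return theta * w

# Example: relative pose of two adjacent finger bones at one time step
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])
R = rotation_between(b1, b2)
feature = log_map(R)   # Lie-algebra feature for this bone pair
```

Stacking such features over all bone pairs and time steps yields the kind of spatio-temporal curve on the Lie group that frameworks like STGauntlet model.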