Uranus
Proportion (ratio)
Computer science
Radar
Gesture
Remote sensing
Geology
Artificial intelligence
Telecommunications
Geography
Physics
Cartography
Astronomy
Planet
Authors
Yue Ling, Dong Zhao, Kaikai Deng, Kangwen Yin, Wenxin Zheng, Huadong Ma
Source
Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
[Association for Computing Machinery]
Date: 2024-11-21
Volume/Issue: 8(4): 1-28
Abstract
Millimeter-wave radar shows great sensing capabilities for pervasive and privacy-preserving gesture recognition. However, the lack of large-scale, dynamic radar datasets hinders the advancement of deep learning models for generalized gesture recognition in dynamic scenes. To address this problem, we design a system that employs abundant dynamic 2D videos to generate realistic radar data, which confronts two challenges: i) simulating the complex signal reflection characteristics of humans and the background, and ii) extracting elusive gesture-relevant features from dynamic radar data. To this end, we design Uranus with two key components: (i) a dynamic data generation network (DDG-Net) that combines several key modules (a human reflection model, a background reflection extractor, and a data fitting model) to simulate the signal reflection characteristics of humans and the background, and then fits the number and global distribution of points in the point clouds to generate realistic radar data; (ii) a dynamic gesture recognition network (DGR-Net) that combines two modules, spatial feature extraction and global feature fusion, to extract the spatial and global features of points in the point clouds, respectively, achieving generalized gesture recognition. We implement and evaluate Uranus with dynamic video data from public video sources and self-collected radar data, demonstrating that Uranus outperforms state-of-the-art approaches for gesture recognition in dynamic scenes.
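To make the DGR-Net description concrete, the sketch below shows one plausible way to pair per-frame spatial feature extraction over radar point clouds with global (temporal) feature fusion for gesture classification. This is a minimal PyTorch illustration under stated assumptions: the per-point input layout (x, y, z, Doppler), the PointNet-style pooling, the GRU-based fusion, and all module names are hypothetical, not the authors' implementation.

```python
# Minimal sketch of a DGR-Net-style recognizer (hypothetical structure, not the
# authors' code): spatial feature extraction within each radar frame, followed
# by global feature fusion across frames and gesture classification.
import torch
import torch.nn as nn


class SpatialFeatureExtractor(nn.Module):
    """Per-point MLP + symmetric max-pooling within one radar point-cloud frame."""

    def __init__(self, in_dim: int = 4, feat_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, frames, num_points, in_dim) -> (batch, frames, feat_dim)
        feats = self.mlp(points)          # per-point spatial features
        return feats.max(dim=2).values    # pool over points in each frame


class GlobalFeatureFusion(nn.Module):
    """Fuse per-frame features across time with a GRU and classify the gesture."""

    def __init__(self, feat_dim: int = 128, hidden: int = 128, num_classes: int = 10):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        _, h_n = self.gru(frame_feats)    # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])         # gesture logits


class GestureRecognizer(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.spatial = SpatialFeatureExtractor()
        self.fusion = GlobalFeatureFusion(num_classes=num_classes)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return self.fusion(self.spatial(points))


if __name__ == "__main__":
    # Toy input: 2 sequences, 16 frames, 64 points per frame, (x, y, z, Doppler).
    model = GestureRecognizer(num_classes=10)
    logits = model(torch.randn(2, 16, 64, 4))
    print(logits.shape)  # torch.Size([2, 10])
```

The split mirrors the abstract's two modules: the spatial stage operates on points within a frame, and the fusion stage aggregates frame-level features into a single gesture decision; any real system trained on DDG-Net-generated data would refine both stages well beyond this sketch.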