Polygon mesh
Computer science
Modality (human-computer interaction)
Artificial intelligence
Silhouette
Computer vision
Computer graphics (images)
Authors
Han Ding,Zhenbin Chen,Cui Zhao,Fei Wang,Ge Wang,Wei Xi,Jizhong Zhao
Source
Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
[Association for Computing Machinery]
Date: 2023-03-27
Volume/Issue: 7 (1): 1-24
Citations: 7
Abstract
Estimating 3D human meshes is appealing for various application scenarios. Current mainstream solutions predict the meshes either from images or from human-reflected RF signals. In this paper, instead of investigating which approach is better, we propose a multi-modality fusion framework, namely MI-Mesh, which estimates 3D meshes by fusing image and mmWave data. To realize this, we design a deep neural network model. It first automatically correlates mmWave point clouds with certain human joints and extracts useful fused features from the two modalities. Then, the features are refined by predicting 2D joints and the silhouette. Finally, we regress pose and shape parameters and feed them to the SMPL model to generate the 3D human meshes. We build a prototype on a commercial mmWave radar and camera. The experimental results demonstrate that, with the integration of multi-modality strengths, MI-Mesh can effectively recover human meshes for dynamic motions and across different conditions.
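The abstract outlines a pipeline in which radar points are correlated with body joints, fused with image features, refined through intermediate 2D-joint and silhouette predictions, and finally regressed to SMPL pose and shape parameters. The sketch below illustrates one possible structure for such a pipeline; it is not the authors' implementation, and the cross-attention fusion, layer sizes, and module names are all assumptions made for illustration.

```python
# Minimal sketch (NOT the MI-Mesh code) of an image + mmWave fusion pipeline
# following the steps described in the abstract. All architectural choices
# (cross-attention fusion, feature dimensions, intermediate heads) are assumed.
import torch
import torch.nn as nn

NUM_JOINTS = 24            # SMPL defines 24 body joints
POSE_DIM = NUM_JOINTS * 3  # axis-angle pose parameters
SHAPE_DIM = 10             # SMPL shape (beta) parameters


class FusionMeshSketch(nn.Module):
    def __init__(self, img_feat_dim=512, fused_dim=256):
        super().__init__()
        # Per-point encoder for mmWave points (x, y, z, Doppler, intensity).
        self.point_encoder = nn.Sequential(
            nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, fused_dim),
        )
        # Image features assumed to come from any pooled CNN backbone output.
        self.img_proj = nn.Linear(img_feat_dim, fused_dim)
        # Learnable joint queries attend to radar points, standing in for the
        # "automatic correlation" of point clouds to human joints.
        self.joint_queries = nn.Parameter(torch.randn(NUM_JOINTS, fused_dim))
        self.attn = nn.MultiheadAttention(fused_dim, num_heads=4, batch_first=True)
        # Intermediate supervision heads: 2D joints and a coarse silhouette.
        self.joint2d_head = nn.Linear(fused_dim, 2)
        self.silhouette_head = nn.Linear(fused_dim, 64 * 64)
        # Final regressor producing SMPL pose and shape parameters.
        self.regressor = nn.Sequential(
            nn.Linear(NUM_JOINTS * fused_dim + fused_dim, 512), nn.ReLU(),
            nn.Linear(512, POSE_DIM + SHAPE_DIM),
        )

    def forward(self, points, img_feat):
        # points: (B, N, 5) mmWave point cloud; img_feat: (B, img_feat_dim)
        B = points.shape[0]
        pt_feat = self.point_encoder(points)                   # (B, N, C)
        queries = self.joint_queries.unsqueeze(0).expand(B, -1, -1)
        joint_feat, _ = self.attn(queries, pt_feat, pt_feat)   # (B, J, C)
        img_emb = self.img_proj(img_feat)                      # (B, C)
        joints_2d = self.joint2d_head(joint_feat)              # (B, J, 2)
        silhouette = self.silhouette_head(img_emb)             # (B, 4096)
        fused = torch.cat([joint_feat.flatten(1), img_emb], dim=-1)
        params = self.regressor(fused)
        pose, shape = params[:, :POSE_DIM], params[:, POSE_DIM:]
        # pose/shape would then be fed to an SMPL layer to produce the 3D mesh.
        return pose, shape, joints_2d, silhouette


# Shape check with random inputs.
model = FusionMeshSketch()
pose, shape, j2d, sil = model(torch.randn(2, 128, 5), torch.randn(2, 512))
print(pose.shape, shape.shape)  # torch.Size([2, 72]) torch.Size([2, 10])
```

The intermediate 2D-joint and silhouette outputs mirror the abstract's refinement step: in training they would receive their own supervision, so the fused features are forced to stay consistent with the image evidence before the SMPL parameters are regressed.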