Artificial intelligence
Computer science
Computer vision
Point cloud
Point (geometry)
Deep learning
Pose
Algorithm
Mathematics
Geometry
Authors
Pengpeng Hu, Edmond S. L. Ho, Adrian Munteanu
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 72: 1-9
Citations: 3
Identifier
DOI: 10.1109/tim.2022.3222501
Abstract
This article proposes a novel deep learning framework that generates omnidirectional 3-D point clouds of human bodies by registering the front- and back-facing partial scans captured by a single depth camera. Our approach requires neither calibration-assisting devices nor canonical postures, and it makes no assumptions about an initial alignment or correspondences between the partial scans. This is achieved by factoring this challenging problem into: 1) building virtual correspondences for the partial scans and 2) implicitly predicting the rigid transformation between the two partial scans via the predicted virtual correspondences. In this study, we regress the skinned multi-person linear model (SMPL) vertices from the two partial scans to build the virtual correspondences. The main challenges are: 1) estimating the body shape and pose under clothing from a single partial point cloud of a dressed body and 2) ensuring that the bodies predicted from the front- and back-facing inputs are the same. We therefore propose a novel deep neural network (DNN), dubbed AlignBodyNet, that introduces shape-interrelated features and a shape-constraint loss to resolve this problem. We also provide a simple yet efficient method for generating real-world partial scans from complete models, which addresses the lack of quantitative comparisons based on real-world data in various research areas, including partial registration, shape completion, and view synthesis. Experiments on synthetic and real-world data show that our method achieves state-of-the-art performance in both objective and subjective terms.
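In the paper, the rigid transformation between the two scans is predicted implicitly by the network via the regressed SMPL vertices. For intuition only, the following sketch shows the classical closed-form alternative: given virtual correspondences (two point sets in one-to-one order), the optimal rotation and translation can be recovered with the Kabsch/Procrustes method. The function name and interface here are hypothetical, not from the paper.

```python
import numpy as np

def rigid_transform_from_correspondences(src, dst):
    """Estimate (R, t) such that R @ src[i] + t ~= dst[i] for corresponding
    point sets src, dst of shape (N, 3), via the Kabsch/Procrustes method."""
    src_c = src.mean(axis=0)                    # centroids
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)         # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                          # optimal rotation
    t = dst_c - R @ src_c                       # optimal translation
    return R, t
```

Given perfect correspondences this recovers the exact transform; the contribution of AlignBodyNet is precisely to produce usable correspondences when none are available between the raw front- and back-facing scans.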