Computer science
Human skeleton
Joint (building)
Pose
Artificial intelligence
Skeleton (computer programming)
Pattern recognition (psychology)
Ambiguity
Convolutional neural network
Computer vision
Visibility
Optics
Physics
Engineering
Architectural engineering
Programming language
Authors
Tianlang Chen, Fang Chen, Xiaohui Shen, Yiheng Zhu, Zhili Chen, Jiebo Luo
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2021-02-05
Volume/Issue: 32 (1): 198-209
Citations: 144
Identifier
DOI: 10.1109/tcsvt.2021.3057267
Abstract
In this work, we propose a new solution to 3D human pose estimation in videos. Instead of directly regressing the 3D joint locations, we draw inspiration from the human skeleton anatomy and decompose the task into bone direction prediction and bone length prediction, from which the 3D joint locations can be completely derived. Our motivation is the fact that the bone lengths of a human skeleton remain consistent across time. This prompts us to develop effective techniques that utilize global information across all the frames of a video for high-accuracy bone length prediction. Moreover, for the bone direction prediction network, we propose a fully-convolutional propagating architecture with long skip connections. Essentially, it predicts the directions of different bones hierarchically without using any time-consuming memory units (e.g., LSTM). A novel joint shift loss is further introduced to bridge the training of the bone length and bone direction prediction networks. Finally, we employ an implicit attention mechanism to feed the 2D keypoint visibility scores into the model as extra guidance, which significantly mitigates the depth ambiguity in many challenging poses. Our full model outperforms the previous best results on the Human3.6M and MPI-INF-3DHP datasets, and comprehensive evaluation validates the effectiveness of our model.
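The bone-based decomposition in the abstract can be illustrated with a minimal sketch (not the authors' implementation): given per-bone lengths and unit directions, 3D joint locations follow by accumulating bone vectors from the root joint along the kinematic tree. The Human3.6M-style 17-joint topology, the function names, and the placeholder values below are illustrative assumptions.

```python
import numpy as np

# (child_joint, parent_joint) pairs forming a kinematic tree rooted at joint 0
# (pelvis); this is an assumed Human3.6M-like ordering, not the paper's exact one.
BONES = [
    (1, 0), (2, 1), (3, 2),           # right leg
    (4, 0), (5, 4), (6, 5),           # left leg
    (7, 0), (8, 7), (9, 8), (10, 9),  # spine, neck, head
    (11, 8), (12, 11), (13, 12),      # left arm
    (14, 8), (15, 14), (16, 15),      # right arm
]

def joints_from_bones(root, lengths, directions, bones=BONES):
    """Recover 3D joint positions from per-bone lengths and unit directions.

    root:       (3,) position of the root joint (pelvis)
    lengths:    (num_bones,) predicted bone lengths
    directions: (num_bones, 3) predicted unit bone directions (parent -> child)
    """
    num_joints = max(max(c, p) for c, p in bones) + 1
    joints = np.zeros((num_joints, 3))
    joints[0] = root
    # Each child joint is its parent plus the bone vector (length * direction).
    for b, (child, parent) in enumerate(bones):
        joints[child] = joints[parent] + lengths[b] * directions[b]
    return joints

# Illustrative usage with placeholder predictions.
root = np.zeros(3)
lengths = np.full(len(BONES), 0.3)                            # dummy lengths (meters)
dirs = np.tile(np.array([0.0, 0.0, 1.0]), (len(BONES), 1))    # dummy unit directions
pose3d = joints_from_bones(root, lengths, dirs)
print(pose3d.shape)  # (17, 3)
```

Because the same accumulation along any bone path also gives the relative shift between two joints, a loss on such shifts can couple the length and direction networks, which is the role the abstract attributes to the joint shift loss.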