Keywords
Pose
Computer science
Frequency domain
Artificial intelligence
Feature (linguistics)
Pattern recognition (psychology)
Computer vision
Philosophy
Linguistics
Authors
Zhenhua Tang, Yanbin Hao, Jia Li, Richang Hong
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2023-06-23
Volume/Issue: 34 (2): 911-923
Cited by: 4
Identifier
DOI:10.1109/tcsvt.2023.3286402
Abstract
Capturing cross-pose correlation from a sequence of frame-level 2D poses is essential for 3D human pose estimation (3D-HPE) in videos. Recent studies have shown the promising potential of modeling pose relations with feature-mixing operations in the temporal domain. However, they seldom consider the interaction across poses in the frequency domain. This paper studies a Frequency-Temporal Collaborative Module (FTCM) to explore the feasibility of encoding cross-pose correlations in both the frequency and temporal domains. FTCM aims to jointly capture global and local cross-pose correlations with a more lightweight network model. Specifically, FTCM splits the pose features into two groups along the channel dimension and separately models the frequency and temporal interactions across poses with different feature-mixing operations in parallel. To this end, we design two pose-mixing units: frequency pose-mixing (FPM) and temporal pose-mixing (TPM). In particular, FPM captures global correlations among different pose frequencies using the representation obtained by converting the original pose signals with the fast Fourier transform (FFT). Unlike the pose-mixing in previous methods such as Transformers, which influences an individual pose with all other poses, TPM locally calibrates each pose with dynamics aggregated from several adjacent poses in the temporal domain, explicitly weighting neighboring poses more heavily than distant ones to enforce a strict locality constraint. In addition, the grouping strategy significantly reduces model complexity. To verify the effectiveness of FTCM, we conduct extensive experiments on two benchmarks (Human3.6M and MPI-INF-3DHP). The experimental results not only exhibit favorable accuracy/complexity trade-offs for FTCM but also show performance superior or comparable to state-of-the-art methods on both datasets. The code and model are publicly available at: https://github.com/zhenhuat/FTCM.
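To make the mechanism concrete, below is a minimal PyTorch sketch of such a frequency-temporal block, based only on the abstract's description: a channel split into two groups, an FFT-based frequency pose-mixing branch, and a strictly local temporal pose-mixing branch. The class name FTCMBlock, the use of a depth-wise convolution for the temporal branch, and all layer sizes are illustrative assumptions, not the authors' actual implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class FTCMBlock(nn.Module):
    """Illustrative frequency-temporal collaborative block (not the official FTCM).

    Splits pose features into two channel groups: one group is mixed
    globally across pose frequencies via the FFT, the other is mixed
    locally across adjacent poses in the temporal domain.
    """

    def __init__(self, channels: int, num_frames: int, kernel_size: int = 3):
        super().__init__()
        assert channels % 2 == 0, "channels must split evenly into two groups"
        half = channels // 2
        # FPM branch (assumed form): learnable complex weights, one per
        # rFFT frequency bin and channel, giving global cross-pose mixing.
        freq_bins = num_frames // 2 + 1
        self.freq_weight = nn.Parameter(
            torch.randn(freq_bins, half, dtype=torch.cfloat) * 0.02
        )
        # TPM branch (assumed form): depth-wise temporal convolution, so each
        # pose is calibrated only by its few neighbors (strict locality).
        self.temporal_mix = nn.Conv1d(
            half, half, kernel_size, padding=kernel_size // 2, groups=half
        )
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels) frame-level pose features
        residual = x
        x = self.norm(x)
        x_f, x_t = x.chunk(2, dim=-1)  # split channels into two groups

        # Frequency pose-mixing: FFT along time, per-frequency reweighting,
        # inverse FFT back to the temporal domain (global interaction).
        f = torch.fft.rfft(x_f, dim=1)            # (B, freq_bins, half), complex
        f = f * self.freq_weight.unsqueeze(0)     # broadcast over the batch
        x_f = torch.fft.irfft(f, n=x_f.shape[1], dim=1)

        # Temporal pose-mixing: local calibration within adjacent poses.
        x_t = self.temporal_mix(x_t.transpose(1, 2)).transpose(1, 2)

        return residual + torch.cat([x_f, x_t], dim=-1)


if __name__ == "__main__":
    # Example with illustrative shapes: an 81-frame pose feature sequence.
    block = FTCMBlock(channels=256, num_frames=81)
    poses = torch.randn(4, 81, 256)
    out = block(poses)
    print(out.shape)  # torch.Size([4, 81, 256])
```

Note that splitting the channels means each branch operates on only half the features, which is consistent with the abstract's claim that the grouping strategy significantly reduces model complexity relative to running both mixers over all channels.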