Keywords
Artificial intelligence
Computer science
Convolutional neural network
Computer vision
Pose
RGB color model
Feature (linguistics)
Pattern recognition (psychology)
Feature extraction
Monocular
Linguistics
Philosophy
Authors
Shaoxiang Guo, Eric Rigall, Yakun Ju, Junyu Dong
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology (Institute of Electrical and Electronics Engineers)
Date: 2022-08-01
Volume/Issue: 32 (8): 5293-5306
Cited by: 3
Identifiers
DOI: 10.1109/tcsvt.2022.3142787
Abstract
3D hand pose estimation from a monocular RGB image is a highly challenging task due to self-occlusion, diverse appearances, and inherent depth ambiguities within monocular images. Most previous methods first employ deep neural networks to fit 2D joint location maps, then combine them with implicit or explicit pose-aware features to directly regress 3D hand joint positions using purpose-built network structures. However, the skeleton positions and the corresponding skeleton-aware content information in the latent space are invariably ignored. This skeleton-aware content effectively bridges the gap between hand joint and hand skeleton information by relating the features of different hand joints to the distribution of hand skeleton positions in 2D space. To address this issue, we propose a simple yet efficient deep neural network that directly recovers reliable 3D hand pose from monocular RGB images with a faster estimation process. Our aim is to reduce the model's computational complexity while maintaining high-precision performance. To this end, we design a novel Feature Chat Block (FCB) for feature boosting, which enables intuitive, enhanced interaction between joint and skeleton features. First, the FCB module updates joint features using a semantic graph convolutional network (GCN) and a multi-head self-attention mechanism. The GCN-based structure focuses on the physical hand-joint connections encoded in a binary adjacency matrix, while the self-attention part attends to the joint pairs in a complementary matrix. Then, the FCB module employs a query-and-key mechanism, with queries and keys representing joint and skeleton features respectively, to further enable feature interaction. Through a stack of FCB modules, our model updates the fused features in a coarse-to-fine manner and finally outputs the predicted 3D hand pose. We conducted a comprehensive set of ablation experiments on the InterHand2.6M dataset to validate the effectiveness and significance of the proposed method. Additionally, experimental results on the Rendered Hand Dataset, Stereo Hand Dataset, First-Person Hand Action Dataset, and FreiHAND Dataset show that our model surpasses state-of-the-art methods with faster inference speed.
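The abstract describes the FCB only at a high level. Below is a minimal PyTorch sketch of one FCB-style block under the stated design: a semantic GCN over the binary joint adjacency, self-attention masked to the complementary (non-adjacent) joint pairs, and query/key cross-attention from joint features to skeleton features. All names, shapes, and hyperparameters (SemanticGCNLayer, FeatureChatBlock, dim, n_heads, the placeholder topology) are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of an FCB-style block; shapes and module names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticGCNLayer(nn.Module):
    """One graph-convolution step over the physical hand-joint graph.

    `adj` is a fixed binary (J x J) adjacency matrix encoding which joints
    are physically connected in the hand skeleton.
    """

    def __init__(self, dim, adj):
        super().__init__()
        self.register_buffer("adj_norm", self._normalize(adj))
        self.linear = nn.Linear(dim, dim)

    @staticmethod
    def _normalize(adj):
        adj = adj + torch.eye(adj.size(0))      # add self-loops
        deg = adj.sum(dim=-1, keepdim=True)
        return adj / deg                        # row-normalized propagation

    def forward(self, x):                       # x: (B, J, dim)
        return F.relu(self.adj_norm @ self.linear(x))


class FeatureChatBlock(nn.Module):
    """Sketch of one FCB: GCN over physically connected joints, self-attention
    restricted to the complementary (non-adjacent) joint pairs, then query/key
    cross-attention letting joint features 'chat' with skeleton features."""

    def __init__(self, dim, adj, n_heads=4):
        super().__init__()
        self.gcn = SemanticGCNLayer(dim, adj)
        # Complementary matrix: allow attention only where joints are NOT linked.
        # Assumes every joint has at least one non-adjacent partner (true for hands).
        comp = 1.0 - (adj + torch.eye(adj.size(0))).clamp(max=1.0)
        self.register_buffer("comp_mask", comp.bool())
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, joint_feat, skel_feat):
        # joint_feat: (B, J, dim) per-joint features
        # skel_feat:  (B, S, dim) per-bone (skeleton) features
        x = self.gcn(joint_feat)                # context from physical connections
        attn_mask = ~self.comp_mask             # True = blocked; keep complementary pairs
        y, _ = self.self_attn(x, x, x, attn_mask=attn_mask)
        x = self.norm1(x + y)                   # fuse GCN and complementary attention
        # Joints (queries) attend to skeleton features (keys/values) to interact.
        z, _ = self.cross_attn(x, skel_feat, skel_feat)
        return self.norm2(x + z)                # updated joint features


if __name__ == "__main__":
    J, S, dim = 21, 20, 64                      # 21 joints, 20 bones (typical hand model)
    bones = [(i, i + 1) for i in range(J - 1)]  # placeholder chain topology
    adj = torch.zeros(J, J)
    for i, j in bones:
        adj[i, j] = adj[j, i] = 1.0
    fcb = FeatureChatBlock(dim, adj)
    out = fcb(torch.randn(2, J, dim), torch.randn(2, S, dim))
    print(out.shape)                            # torch.Size([2, 21, 64])
```

In the paper, a stack of such blocks refines the fused features coarse-to-fine before a regression head outputs the 3D joint positions; the 2D-evidence backbone and the construction of the skeleton features are omitted from this sketch.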