Computer science
RGB color model
Artificial intelligence
Computer vision
Convolutional neural network
Leverage (statistics)
Deep learning
Pattern recognition (psychology)
Authors
Cheng Qin,Jun Cheng,Zhen Liu,Ziliang Ren,Jianming Liu
Identifier
DOI: 10.1016/j.eswa.2023.123061
Abstract
The vulnerability of RGB-based human action recognition in complex environments and varying scenes can be compensated by the skeleton modality. Action recognition methods that fuse the RGB and skeleton modalities have therefore received increasing attention. However, the recognition performance of existing methods remains unsatisfactory due to insufficiently optimized sampling, modeling, and fusion strategies, and their computational cost is heavy. In this paper, we propose a Dense-Sparse Complementary Network (DSCNet), which leverages the complementary information of the RGB and skeleton modalities at light computational cost to achieve competitive action recognition performance. Specifically, we first adopt dense and sparse sampling strategies tailored to the respective strengths of the RGB and skeleton modalities. We then use the skeleton as guiding information to crop the key active region of the persons in the RGB frames, which largely eliminates background interference. Moreover, a Short-Term Motion Extraction Module (STMEM) is proposed to compress the densely sampled RGB frames into fewer frames before feeding them into the backbone network, avoiding a surge in computational cost, and a Sparse Multi-Scale Spatial–Temporal convolutional neural Network (Sparse-MSSTNet) is designed to model the sparse skeleton. Extensive experiments show that our method effectively combines the complementary information of the RGB and skeleton modalities to improve recognition accuracy. DSCNet achieves competitive performance on the NTU RGB+D 60, NTU RGB+D 120, PKU-MMD, UAV-Human, IKEA ASM and Northwestern-UCLA datasets with much less computational cost than existing methods. The code is available at https://github.com/Maxchengqin/DSCNet.
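The skeleton-guided cropping step described above can be illustrated with a minimal sketch: take the bounding box of the 2D skeleton joints, pad it slightly, and crop the RGB frame to that region. This is a hypothetical illustration of the idea only; the function name, the padding rule, and the `margin` parameter are assumptions, not the paper's exact implementation.

```python
import numpy as np

def skeleton_guided_crop(frame, joints, margin=0.15):
    """Crop the active person region of an RGB frame using 2D skeleton
    joints as guidance.

    frame:  H x W x 3 RGB image (NumPy array)
    joints: N x 2 array of (x, y) joint coordinates in pixels
    margin: fractional padding added around the tight joint bounding box
            (an assumed heuristic, not a value from the paper)
    """
    h, w = frame.shape[:2]
    # Tight bounding box of all skeleton joints.
    x_min, y_min = joints.min(axis=0)
    x_max, y_max = joints.max(axis=0)
    # Pad the box so limbs and clothing around the joints stay inside.
    pad_x = (x_max - x_min) * margin
    pad_y = (y_max - y_min) * margin
    x0 = int(max(0, x_min - pad_x))
    y0 = int(max(0, y_min - pad_y))
    x1 = int(min(w, x_max + pad_x))
    y1 = int(min(h, y_max + pad_y))
    return frame[y0:y1, x0:x1]
```

Cropping this way removes most of the static background before the RGB frames reach the backbone, which is one plausible reading of how the skeleton modality "guides" the dense RGB stream.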