Computer science
Discriminative model
RGB color model
Action recognition
Artificial intelligence
Graph
Pattern recognition (psychology)
Convolutional neural network
Skeleton (computer programming)
Human skeleton
Focus (optics)
Theoretical computer science
Optics
Physics
Programming language
Class (philosophy)
Identification (information)
DOI:10.1007/978-3-031-31435-3_10
Abstract
In skeleton-based action recognition, graph convolutional networks (GCN) have been applied to extract features based on the dynamics of the human body, and this approach has recently achieved excellent results. However, GCN-based techniques focus only on the spatial correlations between human joints and often overlook the temporal relationships. In an action sequence, consecutive frames within a neighborhood contain similar poses, so using only temporal convolutions to extract local features limits the flow of useful information into the calculations. In many cases, discriminative features can be present at long-range time steps, and it is important to also consider them in order to create stronger representations. We propose an attentional graph convolutional network, which adapts self-attention mechanisms to model the correlations between human joints and between all time steps, respectively, for skeleton-based action recognition. On two common datasets, NTU-RGB+D60 and NTU-RGB+D120, the proposed method achieved classification results competitive with state-of-the-art methods. The project’s GitHub page: STA-GCN.
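To make the idea of attending over both axes of a skeleton sequence concrete, below is a minimal PyTorch sketch, not the authors' STA-GCN implementation: it applies scaled dot-product self-attention first across the joint axis (spatial correlations within a frame) and then across the time axis (correlations between all frames, including long-range ones). The module names, tensor layout, and hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch of spatial and temporal self-attention over skeleton
# data; shapes and module names are assumptions, not the paper's code.
import torch
import torch.nn as nn


class AxisSelfAttention(nn.Module):
    """Self-attention over one axis (joints or frames) of a skeleton tensor."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length, channels); "length" is the attended axis.
        out, _ = self.attn(x, x, x)
        return out + x  # residual connection


class SpatioTemporalAttentionBlock(nn.Module):
    """Joint-wise attention followed by frame-wise attention on a clip."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.spatial = AxisSelfAttention(channels, heads)
        self.temporal = AxisSelfAttention(channels, heads)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, joints, channels)
        b, t, v, c = x.shape
        # Spatial attention: correlations between joints within each frame.
        xs = self.spatial(x.reshape(b * t, v, c)).reshape(b, t, v, c)
        # Temporal attention: correlations between every pair of time steps
        # for each joint, beyond a local convolution window.
        xt = xs.permute(0, 2, 1, 3).reshape(b * v, t, c)
        xt = self.temporal(xt).reshape(b, v, t, c).permute(0, 2, 1, 3)
        return xt


if __name__ == "__main__":
    clip = torch.randn(2, 64, 25, 64)   # batch, 64 frames, 25 joints, 64 channels
    block = SpatioTemporalAttentionBlock(channels=64)
    print(block(clip).shape)            # torch.Size([2, 64, 25, 64])
```

The key design point this illustrates is the contrast with purely convolutional temporal modeling: because attention weights are computed between all pairs of frames, a discriminative pose far away in the sequence can directly influence the representation of the current frame.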