Hypergraph
Computer science
Artificial intelligence
RGB color model
Series (stratigraphy)
Artificial neural network
Recurrent neural network
Action (physics)
Pattern recognition (psychology)
Object (grammar)
Sequence (biology)
Computer vision
Mathematics
Paleontology
Discrete mathematics
Biology
Physics
Genetics
Quantum mechanics
Authors
Nan Ma, Zhixuan Wu, Yifan Feng, Cheng Wang, Yue Gao
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/pages: 33: 3301-3313
Cited by: 3
Identifier
DOI: 10.1109/TIP.2024.3391913
Abstract
Action recognition has recently attracted considerable attention in computer vision. In dynamic scenes with complicated backgrounds, problems such as object occlusion, insufficient lighting, and weak correlation among human body joints lead to low accuracy in skeleton-based human action recognition. To address this issue, we propose a Multi-View Time-Series Hypergraph Neural Network (MV-TSHGNN). The framework consists of two main parts: the construction of a multi-view time-series hypergraph structure and the learning of multi-view time-series hypergraph convolutions. Specifically, given multi-view video sequence frames, we first extract joint features of actions from the different views. We then construct spatial hypergraphs over limb components and adjacent joints from the joints of different views at the same time step, and temporal hypergraphs from the joints of the same view at consecutive time steps; these hypergraphs establish high-order semantic relationships and cooperatively generate complementary action features. We further design a multi-view time-series hypergraph neural network to efficiently learn the features of the spatial and temporal hypergraphs and improve the accuracy of skeleton-based action recognition. To evaluate the effectiveness and efficiency of MV-TSHGNN, we conduct experiments on the NTU RGB+D, NTU RGB+D 120, and imitating-traffic-police-gestures datasets. The experimental results indicate that our proposed model achieves new state-of-the-art performance.
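To make the described construction concrete, the sketch below (Python/NumPy, not the authors' released code) shows one plausible way to build a limb-component spatial incidence matrix and a sliding-window temporal incidence matrix for an NTU RGB+D-style 25-joint skeleton, followed by a single hypergraph convolution in the standard HGNN form X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Theta. The limb grouping, window size, and layer form are illustrative assumptions, not the paper's exact hyperedge definitions or network design.

```python
# Minimal sketch: spatial/temporal hypergraph construction + one hypergraph convolution.
# Assumptions: 25-joint NTU RGB+D skeleton, hypothetical limb groupings, HGNN-style layer.
import numpy as np

NUM_JOINTS = 25  # NTU RGB+D provides 25 joints per body

# Hypothetical limb-component hyperedges: each hyperedge groups the joints of one body part.
LIMB_HYPEREDGES = [
    [0, 1, 20, 2, 3],            # trunk + head
    [20, 4, 5, 6, 7, 21, 22],    # left arm + hand
    [20, 8, 9, 10, 11, 23, 24],  # right arm + hand
    [0, 12, 13, 14, 15],         # left leg
    [0, 16, 17, 18, 19],         # right leg
]

def spatial_incidence(hyperedges, num_joints=NUM_JOINTS):
    """Incidence matrix H (joints x hyperedges) for one frame of one view."""
    H = np.zeros((num_joints, len(hyperedges)))
    for e, joints in enumerate(hyperedges):
        H[joints, e] = 1.0
    return H

def temporal_incidence(num_frames, num_joints=NUM_JOINTS, window=3):
    """One hyperedge per joint per sliding window, linking the same joint
    across `window` consecutive frames (vertices are joint-frame pairs)."""
    num_vertices = num_frames * num_joints
    edges = []
    for t in range(num_frames - window + 1):
        for j in range(num_joints):
            edges.append([(t + k) * num_joints + j for k in range(window)])
    H = np.zeros((num_vertices, len(edges)))
    for e, verts in enumerate(edges):
        H[verts, e] = 1.0
    return H

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One HGNN-style hypergraph convolution (no nonlinearity):
    X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta."""
    n_e = H.shape[1]
    w = np.ones(n_e) if edge_w is None else np.asarray(edge_w, dtype=float)
    dv = H @ w                  # vertex degrees
    de = H.sum(axis=0)          # hyperedge degrees
    Dv = np.diag(1.0 / np.sqrt(dv + 1e-8))
    De = np.diag(1.0 / (de + 1e-8))
    W = np.diag(w)
    return Dv @ H @ W @ De @ H.T @ Dv @ X @ Theta

# Example: project 8-dim joint features of one frame/view to 16 dims.
X = np.random.randn(NUM_JOINTS, 8)        # per-joint features (hypothetical)
Theta = 0.1 * np.random.randn(8, 16)      # learnable projection (random here)
H_spatial = spatial_incidence(LIMB_HYPEREDGES)
out = hypergraph_conv(X, H_spatial, Theta)  # shape (25, 16)
```

In this sketch the spatial and temporal incidence matrices would feed separate convolution branches whose outputs are fused; the paper's actual fusion of views and of adjacent-joint hyperedges is not reproduced here.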