Keywords: segmentation; artificial intelligence; computer science; motion capture; robustness; computer vision; scale-space segmentation; benchmark; motion (physics); image segmentation; pattern recognition; graph; identification
DOI: 10.1109/icassp48485.2024.10446553
Abstract
Human motion sequence segmentation plays a crucial role in understanding and applying human motion capture (MoCap) sequences. However, most traditional segmentation methods are designed to find the locations where motion features change significantly. When dealing with complex motion scenes, such methods are often inefficient, inaccurate, and limited. To address these challenges, we propose an end-to-end sequence segmentation method based on Spatial Temporal Graph Convolutional Networks (ST-GCN). Our network effectively extracts motion features from MoCap sequences, reduces their dimensionality through convolutional operations, and identifies segmentation points between different motions. Under constraints on over-segmentation and clip length, the optimal segmentation is achieved by combining three carefully designed loss functions. The proposed framework was evaluated on two benchmark datasets, the CMU MoCap database and the HDM05 dataset, and achieved better accuracy and robustness than existing methods.
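The traditional approach the abstract contrasts against, locating frames where motion features change sharply, can be illustrated with a toy sketch. This is not the paper's ST-GCN method; the feature vectors, threshold, and helper names below are hypothetical, purely to show the change-detection baseline:

```python
# Toy illustration (not the paper's method): mark a segmentation point
# wherever the per-frame feature vector jumps by more than a threshold.
def segment_points(features, threshold):
    """Return frame indices where consecutive features differ sharply.

    features: list of per-frame feature vectors (lists of floats).
    threshold: hypothetical sensitivity parameter.
    """
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    diffs = [dist(features[i], features[i + 1])
             for i in range(len(features) - 1)]
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

# Two constant "motions" with an abrupt transition at frame 5.
frames = [[0.0, 0.0]] * 5 + [[1.0, 1.0]] * 5
print(segment_points(frames, 0.5))  # -> [5]
```

As the abstract notes, such threshold-based detection degrades on complex scenes where feature changes within one motion can rival those between motions, which motivates the learned end-to-end approach.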