Computer science
Artificial intelligence
Pattern recognition (psychology)
Benchmark (surveying)
Pooling
Feature (linguistics)
Convolutional neural network
Motion (physics)
Trajectory
Computer vision
Deep learning
Segmentation
Feature learning
Authors
Leiyue Yao, Wenzhu Yang, Wei Huang, Nan Jiang, Bo Zhou
Abstract
The aim of temporal action localization (TAL) is to determine the start and end frames of an action in a video. In recent years, TAL has attracted considerable attention because of its increasing applications in video understanding and retrieval. However, precisely estimating the duration of an action in the temporal dimension remains a challenging problem. In this paper, we propose an effective one-stage TAL method based on a self-defined motion data structure, called a dense joint motion matrix (DJMM), and a novel temporal detection strategy. Our method provides three main contributions. First, compared with mainstream motion images, DJMMs preserve more pre-processed motion features and provide more precise detail representations. Furthermore, DJMMs solve the temporal information loss caused by motion trajectories overlapping within a given time period. Second, a spatial pyramid pooling (SPP) layer, which is widely used in object detection and tracking, is incorporated into the proposed method for multi-scale feature learning. Moreover, the SPP layer enables the backbone convolutional neural network (CNN) to accept DJMMs of any size in the temporal dimension. Third, a large-scale-first temporal detection strategy, inspired by a well-developed Chinese text segmentation algorithm, is proposed to handle long-duration videos. Our method is evaluated on two benchmark datasets and one self-collected dataset: Florence-3D, UTKinect-Action3D, and HanYue-3D. The experimental results show that our method achieves competitive action recognition accuracy and high TAL precision, and its time efficiency and few-shot learning capabilities make it suitable for real-time surveillance.
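To illustrate how an SPP layer produces a fixed-length feature vector from inputs of varying temporal size, here is a minimal NumPy sketch of standard spatial pyramid pooling (not the paper's exact implementation): the feature map is max-pooled over grids of several resolutions and the results are concatenated, so the output length depends only on the pyramid levels, not on the input dimensions. The `levels` values and function name are illustrative assumptions.

```python
import numpy as np

def spp_pool(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid pooling sketch: max-pool a 2-D feature map over an
    n x n grid for each pyramid level and concatenate the cell maxima into
    one fixed-length vector, regardless of the input's height/width.
    Assumes each input dimension is at least as large as the finest level."""
    h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Split the map into an n x n grid of (possibly uneven) cells.
        h_edges = np.linspace(0, h, n + 1).astype(int)
        w_edges = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[h_edges[i]:h_edges[i + 1],
                                   w_edges[j]:w_edges[j + 1]]
                pooled.append(cell.max())
    # Output length = sum(n * n for n in levels), e.g. 1 + 4 + 16 = 21 here.
    return np.array(pooled)
```

Because the output length is fixed (21 values for levels 1, 2, and 4), a feature map spanning 40 frames and one spanning 400 frames both yield vectors the downstream fully connected layers can consume.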