Multi-scale feature learning and temporal probing strategy for one-stage temporal action localization

Keywords: Computer Science, Artificial Intelligence, Pattern Recognition (Psychology), Benchmark (Surveying), Pooling, Feature (Linguistics), Convolutional Neural Network, Motion (Physics), Trajectory, Computer Vision, Deep Learning, Segmentation, Feature Learning, Physics, Philosophy, Astronomy, Linguistics, Geography, Geodesy
Authors
Leiyue Yao,Wei Yang,Wei Huang,Nan Jiang,Bingbing Zhou
Source
Journal: International Journal of Intelligent Systems [Wiley]
Volume (Issue): 37 (7): 4092-4112    Cited by: 6
Identifier
DOI: 10.1002/int.22713
Abstract

The aim of temporal action localization (TAL) is to determine the start and end frames of an action in a video. In recent years, TAL has attracted considerable attention because of its increasing applications in video understanding and retrieval. However, precisely estimating the duration of an action in the temporal dimension remains a challenging problem. In this paper, we propose an effective one-stage TAL method based on a self-defined motion data structure, called a dense joint motion matrix (DJMM), and a novel temporal detection strategy. Our method provides three main contributions. First, compared with mainstream motion images, DJMMs preserve more pre-processed motion features and provide more precise detail representations. Furthermore, DJMMs solve the temporal information loss caused by motion trajectory overlaps within a given time period. Second, a spatial pyramid pooling (SPP) layer, widely used in object detection and tracking, is innovatively incorporated into the proposed method for multi-scale feature learning. Moreover, the SPP layer enables the backbone convolutional neural network (CNN) to accept DJMMs of any size in the temporal dimension. Third, a large-scale-first temporal detection strategy, inspired by a well-developed Chinese text segmentation algorithm, is proposed to handle long-duration videos. Our method is evaluated on two benchmark data sets and one self-collected data set: Florence-3D, UTKinect-Action3D, and HanYue-3D. The experimental results show that our method achieves competitive action recognition accuracy and high TAL precision, and its time efficiency and few-shot learning capability make it suitable for real-time surveillance.
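The following is a minimal sketch, not the authors' code, of the idea behind the SPP contribution: pooling a variable-length temporal feature map into a fixed-size vector so that a CNN backbone can accept DJMMs of arbitrary temporal length. The class name TemporalSPP, the pyramid levels (1, 2, 4), and the (batch, channels, T) feature shape are illustrative assumptions; the paper's exact layer configuration is not given in the abstract.

    # Sketch of SPP applied along the temporal dimension (PyTorch).
    # Assumed: backbone features of shape (batch, channels, T) with arbitrary T.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemporalSPP(nn.Module):
        """Pools a (batch, channels, T) feature map into a fixed-length vector."""

        def __init__(self, levels=(1, 2, 4)):  # illustrative pyramid levels
            super().__init__()
            self.levels = levels  # number of temporal bins per pyramid level

        def forward(self, x):
            # x: (batch, channels, T); T may differ between inputs
            pooled = [F.adaptive_max_pool1d(x, bins).flatten(1) for bins in self.levels]
            # Output length is channels * sum(levels), independent of T
            return torch.cat(pooled, dim=1)

    if __name__ == "__main__":
        spp = TemporalSPP()
        for t in (37, 64, 128):                 # different temporal lengths
            feats = torch.randn(2, 256, t)      # hypothetical backbone features
            print(spp(feats).shape)             # always torch.Size([2, 1792])

Because the pooled vector has a fixed length regardless of T, the fully connected layers after the backbone see a constant input size, which is what allows DJMMs of any temporal extent to be processed without resizing or cropping.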