Multi‐scale feature learning and temporal probing strategy for one‐stage temporal action localization

Keywords: Computer Science · Artificial Intelligence · Pattern Recognition · Benchmark · Pooling · Feature · Convolutional Neural Network · Motion · Trajectory · Computer Vision · Deep Learning · Segmentation · Feature Learning
Authors
Leiyue Yao,Wei Yang,Wei Huang,Nan Jiang,Bingbing Zhou
Source
Journal: International Journal of Intelligent Systems [Wiley]
Volume/Issue: 37 (7): 4092-4112 · Cited by: 4
Identifier
DOI:10.1002/int.22713
Abstract

The aim of temporal action localization (TAL) is to determine the start and end frames of an action in a video. In recent years, TAL has attracted considerable attention because of its increasing applications in video understanding and retrieval. However, precisely estimating the duration of an action in the temporal dimension remains a challenging problem. In this paper, we propose an effective one-stage TAL method based on a self-defined motion data structure, called the dense joint motion matrix (DJMM), and a novel temporal detection strategy. Our method provides three main contributions. First, compared with mainstream motion images, DJMMs preserve more pre-processed motion features and provide more precise representations of detail. Furthermore, DJMMs effectively address the temporal information loss caused by motion trajectories overlapping within a given time period. Second, a spatial pyramid pooling (SPP) layer, widely used in object detection and tracking, is incorporated into the proposed method for multi-scale feature learning; the SPP layer also enables the backbone convolutional neural network (CNN) to accept DJMMs of any size in the temporal dimension. Third, a large-scale-first temporal detection strategy, inspired by a well-established Chinese text segmentation algorithm, is proposed to handle long-duration videos. Our method is evaluated on two benchmark data sets and one self-collected data set: Florence-3D, UTKinect-Action3D, and HanYue-3D. The experimental results show that our method achieves competitive action recognition accuracy and high TAL precision, and its time efficiency and few-shot learning capability make it suitable for real-time surveillance.
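Two mechanisms named in the abstract lend themselves to a brief illustration: an SPP layer that maps a variable-length temporal input to a fixed-length feature, and a large-scale-first probing strategy analogous to forward maximum matching in Chinese text segmentation. The sketch below is a hedged approximation, not the authors' implementation: `temporal_spp`, `probe_large_scale_first`, the `classify` callable, the pyramid levels, the window sizes, and the confidence threshold are all hypothetical stand-ins for the paper's DJMM/CNN pipeline.

```python
# Minimal sketch (assumptions, not the paper's code) of the two ideas above:
# (1) pyramid pooling over an arbitrary temporal length,
# (2) a greedy "large-scale-first" scan analogous to forward maximum matching.
from typing import Callable, List, Tuple

import torch
import torch.nn.functional as F


def temporal_spp(features: torch.Tensor, levels=(1, 2, 4)) -> torch.Tensor:
    """Pool a (batch, channels, T) feature map at several temporal scales and
    concatenate the results, so the classifier head no longer depends on T."""
    pooled = [F.adaptive_max_pool1d(features, output_size=k).flatten(1) for k in levels]
    return torch.cat(pooled, dim=1)


def probe_large_scale_first(
    num_frames: int,
    classify: Callable[[int, int], Tuple[str, float]],  # hypothetical scorer for a frame interval
    window_sizes: List[int],
    threshold: float = 0.5,
) -> List[Tuple[int, int, str]]:
    """Scan a long video greedily: at each position, try the largest window
    first and accept the first one whose confidence clears the threshold,
    mirroring maximum-matching word segmentation."""
    sizes = sorted(window_sizes, reverse=True)
    detections, start = [], 0
    while start < num_frames:
        for size in sizes:
            end = min(start + size, num_frames)
            label, conf = classify(start, end)
            if conf >= threshold and label != "background":
                detections.append((start, end, label))
                start = end  # consume the matched segment and continue after it
                break
        else:
            start += sizes[-1]  # nothing matched; advance by the smallest step
    return detections
```

In such a setup, the interval scorer would presumably render the interval's DJMM and pass it through the SPP-equipped CNN; the window sizes and threshold would be tuned on the training data.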