Keywords
Computer science, Motion (physics), Coding (set theory), Temporal database, Temporal difference learning, Action recognition, Time scale, Artificial intelligence, Visualization, Action (physics), Feature extraction, Pattern recognition (psychology), Data mining, Physics, Reinforcement learning, Set (abstract data type), Biology, Programming language, Quantum mechanics, Class (philosophy), Ecology
Authors
Limin Wang, Zhan Tong, Bin Ji, Gangshan Wu
Identifier
DOI: 10.1109/cvpr46437.2021.00193
Abstract
Temporal modeling remains challenging for action recognition in videos. To mitigate this issue, this paper presents a new video architecture, termed the Temporal Difference Network (TDN), with a focus on capturing multi-scale temporal information for efficient action recognition. The core of our TDN is an efficient temporal difference module (TDM) that explicitly leverages a temporal difference operator; we systematically assess its effect on short-term and long-term motion modeling. To fully capture temporal information over the entire video, our TDN adopts a two-level difference modeling paradigm. Specifically, for local motion modeling, temporal differences over consecutive frames supply 2D CNNs with finer motion patterns, while for global motion modeling, temporal differences across segments are incorporated to capture long-range structure for motion feature excitation. TDN provides a simple and principled temporal modeling framework that can be instantiated with existing CNNs at a small extra computational cost. Our TDN sets a new state of the art on the Something-Something V1 & V2 datasets and is on par with the best performance on the Kinetics-400 dataset. In addition, we conduct in-depth ablation studies and present visualization results for our TDN, which we hope provide insightful analysis of temporal difference modeling. We release the code at https://github.com/MCG-NJU/TDN.
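The two-level paradigm described in the abstract is straightforward to prototype. Below is a minimal PyTorch sketch of both levels: a short-term branch that fuses a pooled frame-difference cue into a 2D CNN feature, and a long-term branch that gates segment features with a cross-segment difference signal. This is an illustrative approximation under stated assumptions, not the authors' implementation (see the linked repository); the module names, channel sizes, and the additive-fusion and sigmoid-excitation details are assumptions made for clarity.

```python
# Illustrative sketch of the two-level temporal difference idea. NOT the
# official TDN code (see https://github.com/MCG-NJU/TDN); shapes, names,
# and fusion choices here are hypothetical.
import torch
import torch.nn as nn


class ShortTermTDM(nn.Module):
    """Local motion: encode RGB differences between consecutive frames and
    add them to the appearance feature of the clip's center frame."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.diff_conv = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.rgb_conv = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W), a short clip of consecutive frames
        center = frames[:, frames.shape[1] // 2]     # appearance frame
        diffs = frames[:, 1:] - frames[:, :-1]       # temporal differences
        motion = self.diff_conv(diffs.mean(dim=1))   # pooled motion cue
        return self.rgb_conv(center) + motion        # additive fusion


class LongTermTDM(nn.Module):
    """Global motion: use feature differences across segments to produce a
    channel-wise gate that excites motion-sensitive features."""

    def __init__(self, channels: int):
        super().__init__()
        self.excite = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, S, C, H, W), one feature map per segment
        diffs = feats[:, 1:] - feats[:, :-1]              # cross-segment differences
        diffs = torch.cat([diffs, diffs[:, -1:]], dim=1)  # pad to S entries
        b, s, c, h, w = feats.shape
        gate = self.excite(diffs.reshape(b * s, c, h, w))
        return (feats.reshape(b * s, c, h, w) * gate).reshape(b, s, c, h, w)


if __name__ == "__main__":
    clip = torch.randn(2, 5, 3, 56, 56)       # B=2, T=5 frames
    local_feat = ShortTermTDM(64)(clip)       # -> (2, 64, 56, 56)
    seg_feats = torch.randn(2, 8, 64, 28, 28) # S=8 segment-level features
    excited = LongTermTDM(64)(seg_feats)      # -> (2, 8, 64, 28, 28)
    print(local_feat.shape, excited.shape)
```

Note the design split this sketch mirrors: raw RGB differences are cheap to compute and carry fine-grained local motion, so they feed the 2D CNN directly, whereas cross-segment differences operate on features and act as an attention-style excitation over long ranges.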