Computer science
Block (permutation group theory)
Artificial intelligence
Computation
RGB color model
Pattern recognition (psychology)
Convolutional neural network
Action recognition
Algorithm
Geometry
Mathematics
Class (philosophy)
Authors
Yichen Zhou, Ziyuan Huang, Xulei Yang, Marcelo H. Ang, Teck Khim Ng
Identifier
DOI:10.1016/j.patcog.2022.108970
Abstract
In this work, we present an efficient and powerful building block for video action recognition, dubbed Glance and Combine Module (GCM). In order to obtain a broader perspective of the video features, GCM introduces an extra glancing operation with a larger receptive field over both the spatial and temporal dimensions, and combines features with different receptive fields for further processing. We show in our ablation studies that the proposed GCM is much more efficient than other forms of 3D spatio-temporal convolutional blocks. We build a series of GCM networks by stacking GCM repeatedly, and train them from scratch directly on the target datasets. On the Kinetics-400 dataset, which focuses more on appearance than on action, our GCM networks achieve accuracy comparable to other models without pre-training on ImageNet. On more action-centric recognition datasets such as Something-Something (V1 & V2) and Multi-Moments in Time, the GCM networks achieve state-of-the-art performance with less than two thirds the computational complexity of other models. With only 19.2 GFLOPs of computation, our GCMNet15 obtains 63.9% top-1 classification accuracy on the Something-Something-V2 validation set under single-crop testing. On the fine-grained action recognition dataset FineGym, we beat the previous state-of-the-art accuracy achieved with 2-stream methods by more than 6% using only RGB input.
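The core idea the abstract describes, a local branch plus a "glancing" branch with a larger spatio-temporal receptive field whose outputs are combined, can be illustrated with a minimal NumPy sketch. This is a hypothetical toy, not the paper's implementation: the function names are invented, mean filters stand in for learned 3D convolutions, and the combination is a plain sum.

```python
import numpy as np

def avg_pool_same(x, k):
    """Mean filter with a (k, k, k) window over the first three axes (T, H, W),
    'same' output size via edge-replication padding. A crude stand-in for a
    spatio-temporal convolution with receptive field k."""
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(x.shape, dtype=float)
    T, H, W, _ = x.shape
    for t in range(T):
        for h in range(H):
            for w in range(W):
                out[t, h, w] = xp[t:t + k, h:h + k, w:w + k].mean(axis=(0, 1, 2))
    return out

def glance_and_combine(x, local_k=3, glance_k=7):
    """Toy GCM-style block: a local branch with a small receptive field and a
    glancing branch with a larger one, combined by summation.
    x has shape (T, H, W, C); the kernel sizes are illustrative assumptions."""
    local = avg_pool_same(x, local_k)    # small spatio-temporal neighborhood
    glance = avg_pool_same(x, glance_k)  # broader "glance" over the clip
    return local + glance                # combine the two receptive fields
```

Stacking such blocks, with learned filters in place of the mean pools, yields the kind of GCM network the abstract describes.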