Computer science
Architecture
Artificial intelligence
State (computer science)
Transformer
Human–computer interaction
Algorithm
Quantum mechanics
Physics
Art
Visual arts
Voltage
Authors
Chao-Yuan Wu, Philipp Krähenbühl
Identifier
DOI: 10.1109/cvpr46437.2021.00192
Abstract
Our world offers a never-ending stream of visual stimuli, yet today’s vision systems only accurately recognize patterns within a few seconds. These systems understand the present, but fail to contextualize it in past or future events. In this paper, we study long-form video understanding. We introduce a framework for modeling long-form videos and develop evaluation protocols on large-scale datasets. We show that existing state-of-the-art short-term models are limited for long-form tasks. A novel object-centric transformer-based video recognition architecture performs significantly better on 7 diverse tasks. It also outperforms comparable state-of-the-art on the AVA dataset.
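The abstract describes an object-centric, transformer-based design: object instances (e.g. people) detected across a long video become tokens, and self-attention relates them across time. The sketch below is a minimal, hypothetical illustration of that idea in NumPy — a single-head self-attention pass over object tokens followed by pooling into one long-form video descriptor. All names and dimensions here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention over a (num_tokens, dim) array of object features."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (num_tokens, num_tokens) affinities
    return softmax(scores, axis=-1) @ v       # context-aware object features

rng = np.random.default_rng(0)
dim = 64                    # feature dimension (illustrative)
num_objects = 128           # object tokens pooled over many short clips (illustrative)
tokens = rng.normal(size=(num_objects, dim))
Wq, Wk, Wv = (rng.normal(size=(dim, dim)) * dim ** -0.5 for _ in range(3))

mixed = self_attention(tokens, Wq, Wk, Wv)    # each object attends to all others
video_feature = mixed.mean(axis=0)            # pool into one long-form descriptor
print(video_feature.shape)                    # (64,)
```

The key contrast with frame-level short-term models is the token granularity: attention operates over object instances spanning the whole video rather than over pixels or frames of a few-second clip.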