Computer science
Artificial intelligence
Vocabulary
Action (physics)
Action recognition
Speech recognition
Pattern recognition (psychology)
Natural language processing
Linguistics
Philosophy
Physics
Quantum mechanics
Class (philosophy)
Authors
Zhiheng Li, Yujie Zhong, Ran Song, Tianjiao Li, Lin Ma, Wei Zhang
Identifier
DOI:10.1109/tpami.2024.3395778
Abstract
Pre-trained visual-language (ViL) models have demonstrated good zero-shot capability in video understanding tasks, where they are usually adapted through fine-tuning or temporal modeling. However, in the task of open-vocabulary temporal action localization (OV-TAL), such adaptation reduces the robustness of ViL models against different data distributions, leading to a misalignment between visual representations and text descriptions of unseen action categories. As a result, existing methods often make a trade-off between action detection and classification. To address this issue, this paper proposes DeTAL, a simple but effective two-stage approach to OV-TAL. DeTAL decouples action detection from action classification to avoid the compromise between them, so that state-of-the-art methods for closed-set action localization can be readily adapted to OV-TAL, which significantly improves performance. Meanwhile, DeTAL can easily handle the scenario where action category annotations are unavailable in the training dataset. In the experiments, we propose a new cross-dataset setting to evaluate the zero-shot capability of different methods, and the results demonstrate that DeTAL outperforms state-of-the-art methods for OV-TAL on both THUMOS14 and ActivityNet1.3. Code and data are publicly available at https://github.com/vsislab/DeTAL.
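The two-stage decoupling the abstract describes can be sketched as follows. This is an illustrative outline only, not the authors' actual implementation: all function names, the stubbed proposal detector, and the toy embeddings are hypothetical. Stage 1 produces class-agnostic action segments (as a closed-set localizer would), and stage 2 assigns each segment an open-vocabulary label by similarity against frozen ViL (e.g. CLIP-style) text embeddings of category names.

```python
# Hedged sketch of a two-stage, decoupled OV-TAL pipeline.
# Names and data below are illustrative assumptions, not the DeTAL API.
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def detect_segments(video_features):
    # Stage 1 (class-agnostic detection): a closed-set localizer adapted to
    # emit (start_sec, end_sec, actionness) proposals without naming a
    # category. Stubbed here with fixed proposals for illustration.
    return [(2.0, 5.5, 0.9), (10.0, 12.0, 0.7)]

def classify_segment(segment_embedding, text_embeddings):
    # Stage 2 (open-vocabulary classification): pick the category whose
    # frozen ViL text embedding is most similar to the segment's visual
    # embedding; the vocabulary can change at test time without retraining.
    scores = {name: cosine(segment_embedding, emb)
              for name, emb in text_embeddings.items()}
    return max(scores, key=scores.get)

# Toy vectors standing in for frozen ViL text/visual features.
text_embeddings = {
    "high jump": [1.0, 0.1, 0.0],
    "long jump": [0.2, 1.0, 0.0],
}
segment_embedding = [0.9, 0.2, 0.1]

proposals = detect_segments(None)
label = classify_segment(segment_embedding, text_embeddings)
print(label)  # → high jump
```

Because the detector never commits to a category, swapping the text-embedding vocabulary is enough to localize unseen classes, which is why decoupling avoids the detection-versus-classification compromise the abstract mentions.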