Authors
Zongyang Ma,Ziqi Zhang,Yuxin Chen,Zhongang Qi,Chunfeng Yuan,Bing Li,Yingmin Luo,Xu Li,Xiaojuan Qi,Ying Shan,Weiming Hu
Source
Journal: Cornell University - arXiv
Date: 2024-07-10
Identifier
DOI: 10.48550/arXiv.2407.07478
Abstract
Understanding the content of events occurring in a video and their inherent temporal logic is crucial for video-text retrieval. However, web-crawled pre-training datasets often lack sufficient event information, and the widely adopted video-level cross-modal contrastive learning also struggles to capture detailed and complex video-text event alignment. To address these challenges, we make improvements from both data and model perspectives. In terms of pre-training data, we focus on supplementing the missing specific event content and event temporal transitions with the proposed event augmentation strategies. Based on the event-augmented data, we construct a novel Event-Aware Video-Text Retrieval model, i.e., EA-VTR, which achieves powerful video-text retrieval ability through superior video event awareness. EA-VTR can efficiently encode frame-level and video-level visual representations simultaneously, enabling detailed event content and complex event temporal cross-modal alignment, ultimately enhancing the comprehensive understanding of video events. Our method not only significantly outperforms existing approaches on multiple datasets for Text-to-Video Retrieval and Video Action Recognition tasks, but also demonstrates superior event content perception on Multi-event Video-Text Retrieval and Video Moment Retrieval tasks, as well as outstanding event temporal logic understanding on the Test of Time task.
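The abstract builds on video-level cross-modal contrastive learning over frame-level and video-level visual representations. Below is a minimal, illustrative sketch of such an alignment objective, assuming a standard symmetric InfoNCE loss in PyTorch; the tensor shapes, mean pooling, and function names are hypothetical and are not taken from the EA-VTR implementation.

```python
# Illustrative sketch (not the authors' code): symmetric InfoNCE-style
# video-text contrastive alignment over batch-paired embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(video_emb: torch.Tensor, text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over (B, D) video and text embeddings.

    Matched video-text pairs share the same row index; all other pairs
    in the batch act as negatives.
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Hypothetical usage: per-frame features (B, T, D) are mean-pooled into a
# video-level embedding for video-level alignment; the same frame features
# could instead be matched to per-event text embeddings for finer-grained
# event content alignment.
B, T, D = 8, 16, 512
frame_feats = torch.randn(B, T, D)
text_emb = torch.randn(B, D)
video_emb = frame_feats.mean(dim=1)             # simple video-level pooling
loss = contrastive_loss(video_emb, text_emb)
```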