Computer science
Parsing
Pattern
Artificial intelligence
Context (archaeology)
Task (project management)
Semantics (computer science)
Event (particle physics)
Context model
Machine learning
Speech recognition
Natural language processing
Object (grammar)
Paleontology
Social science
Physics
Management
Quantum mechanics
Sociology
Economics
Biology
Programming language
Authors
Xun Jiang, Xing Xu, Zhiguo Chen, Jingran Zhang, Jingkuan Song, Fumin Shen, Huimin Lu, Heng Tao Shen
Identifier
DOI:10.1145/3503161.3548309
Abstract
The Weakly-Supervised Audio-Visual Video Parsing (AVVP) task aims to parse a video into temporal segments and predict their event categories in terms of modalities, labeling them as audible, visible, or both. Since temporal boundary and modality annotations are not provided and only video-level event labels are available, this task is more challenging than conventional video understanding tasks. Most previous works attempt to analyze videos by jointly modeling the audio and visual data and then learning information from segment-level features with fixed lengths. However, such a design has two defects: 1) the varied semantic information hidden in different temporal lengths is neglected, which may lead the models to learn incorrect information; 2) due to the joint context modeling, the unique features of the different modalities are not fully explored. In this paper, we propose a novel AVVP framework termed Dual Hierarchical Hybrid Network (DHHN) to tackle these two problems. Our DHHN method consists of three components: 1) a hierarchical context modeling network for extracting different semantics at multiple temporal lengths; 2) a modality-wise guiding network for learning unique information from different modalities; 3) a dual-stream framework generating audio and visual predictions separately. It maintains the best adaptation to each modality, further boosting video parsing performance. Extensive quantitative and qualitative experiments demonstrate that our proposed method establishes new state-of-the-art performance on the AVVP task.
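To make the two key ideas in the abstract concrete, below is a minimal, illustrative PyTorch sketch of (a) multi-scale temporal context modeling and (b) a dual-stream head that predicts audio and visual events separately. This is not the authors' DHHN implementation: the module names, the pooling-based multi-scale design, the window sizes, the feature dimension, and the event count are all assumptions for illustration only.

```python
import torch
import torch.nn as nn

class MultiScaleTemporalContext(nn.Module):
    """Hypothetical multi-scale context block: pools segment features
    over several temporal window sizes and fuses the results, so that
    semantics at different temporal lengths are all represented.
    The window sizes (1, 3, 5) are an assumption, not from the paper."""
    def __init__(self, dim, window_sizes=(1, 3, 5)):
        super().__init__()
        # Odd windows with stride 1 and padding w//2 preserve the time axis.
        self.pools = nn.ModuleList(
            nn.AvgPool1d(w, stride=1, padding=w // 2) for w in window_sizes
        )
        self.fuse = nn.Linear(dim * len(window_sizes), dim)

    def forward(self, x):           # x: (batch, time, dim)
        x_t = x.transpose(1, 2)     # (batch, dim, time) for 1-D pooling
        multi = [p(x_t).transpose(1, 2) for p in self.pools]
        return self.fuse(torch.cat(multi, dim=-1))

class DualStreamParser(nn.Module):
    """Hypothetical dual-stream head: audio and visual segment features
    are contextualized and classified separately, so each modality keeps
    its own per-segment event predictions (audible vs. visible)."""
    def __init__(self, dim=512, num_events=25):
        super().__init__()
        self.audio_ctx = MultiScaleTemporalContext(dim)
        self.visual_ctx = MultiScaleTemporalContext(dim)
        self.audio_head = nn.Linear(dim, num_events)
        self.visual_head = nn.Linear(dim, num_events)

    def forward(self, audio_feats, visual_feats):
        # Per-segment, per-modality event logits: (batch, time, num_events)
        audio_logits = self.audio_head(self.audio_ctx(audio_feats))
        visual_logits = self.visual_head(self.visual_ctx(visual_feats))
        return audio_logits, visual_logits

# Toy usage: 10 one-second segments with 512-dim features per modality.
model = DualStreamParser()
a = torch.randn(2, 10, 512)
v = torch.randn(2, 10, 512)
audio_logits, visual_logits = model(a, v)
print(audio_logits.shape, visual_logits.shape)  # (2, 10, 25) each
```

Keeping two separate streams, as sketched above, means an event can be predicted as audible in one stream and absent in the other, which matches the AVVP requirement of modality-specific labels; a single jointly modeled stream would blur that distinction.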