Authors
Fan Feng,Yue Ming,Nannan Hu,Hui Yu,Yuanan Liu
Identifier
DOI:10.1109/tmm.2023.3270624
Abstract
Audio-visual event (AVE) localization aims to localize the temporal boundaries of events that contain both visual and audio content, and to identify event categories in unconstrained videos. Existing work usually utilizes successive video segments for temporal modeling. However, ambient sounds or irrelevant visual targets in some segments often cause audio-visual semantic inconsistency, resulting in inaccurate global event modeling. To tackle this issue, we present a consistent segment selection network (CSS-Net) in this paper. First, we propose a novel bidirectional guided co-attention (BGCA) block, containing two distinct attention paths from audio to vision and from vision to audio, to focus on sound-related visual regions and event-related sound segments. Then, we propose a novel context-aware similarity measure (CASM) module to select semantically consistent visual and audio segments. A cross-correlation matrix is constructed from the correlation coefficients between the visual and audio feature pairs at all time steps. By retaining highly correlated segments and discarding weakly correlated ones, the visual and audio features can learn global event semantics in videos. Finally, we propose a novel audio-visual contrastive loss to learn similar semantic representations for the global visual and audio features under cosine and L2 similarity constraints. Extensive experiments on the public AVE dataset demonstrate the effectiveness of the proposed CSS-Net. It achieves the best localization accuracies of 80.5% and 76.8% in the fully- and weakly-supervised settings, respectively, compared with other state-of-the-art methods.
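The two ideas at the core of the abstract — scoring visual/audio segment pairs by correlation to keep only semantically consistent ones, and pulling the global visual and audio features together with cosine and L2 terms — can be illustrated with a minimal NumPy sketch. This is a simplified toy version under assumed inputs (feature matrices `V`, `A` of shape `(T, d)` and a hypothetical `keep_ratio` parameter); the paper's actual CASM module, thresholding rule, and loss weighting may differ.

```python
import numpy as np

def select_consistent_segments(V, A, keep_ratio=0.75):
    """Toy version of context-aware segment selection: score each time step
    by the cosine correlation of its visual/audio feature pair, then keep
    the most correlated segments and discard the rest.
    V, A: (T, d) visual and audio segment features."""
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    # Cross-correlation matrix over all time-step pairs (T x T)
    C = Vn @ An.T
    # Per-segment consistency score: diagonal = matched visual/audio pairs
    scores = np.diag(C)
    k = max(1, int(keep_ratio * len(scores)))
    keep = np.argsort(scores)[::-1][:k]  # indices of the k most consistent segments
    return np.sort(keep), C

def audio_visual_contrastive_loss(v_g, a_g):
    """Toy audio-visual contrastive objective: pull the global visual and
    audio features together under cosine and L2 similarity constraints."""
    cos = np.dot(v_g, a_g) / (np.linalg.norm(v_g) * np.linalg.norm(a_g))
    l2 = np.linalg.norm(v_g - a_g)
    return (1.0 - cos) + l2  # zero when the two global features coincide

# Tiny demo: T=10 segments, d=8 dims; segment 3 is audio-inconsistent
rng = np.random.default_rng(0)
V = rng.normal(size=(10, 8))
A = V + 0.1 * rng.normal(size=(10, 8))  # mostly consistent audio
A[3] = rng.normal(size=8)               # ambient-sound-like outlier
kept, C = select_consistent_segments(V, A, keep_ratio=0.9)
```

In this sketch the diagonal of the cross-correlation matrix scores matched pairs, so an outlier segment like index 3 receives a low score and tends to be dropped first; the loss term is zero only when the two global features are identical in both direction and magnitude.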