BAVS: Bootstrapping Audio-Visual Segmentation by Integrating Foundation Knowledge

Topics: Computer Science, Bootstrapping, Audio-Visual, Foundation Knowledge, Segmentation, Image Segmentation, Artificial Intelligence, Natural Language Processing, Multimedia
Authors
Chen Liu,Peike Li,Hu Zhang,Lincheng Li,Zi Huang,Dadong Wang,Xin Yu
Source
Journal: IEEE Transactions on Multimedia [Institute of Electrical and Electronics Engineers]
Pages: 1-13 | Citations: 5
Identifier
DOI: 10.1109/tmm.2024.3405622
Abstract

Given an audio-visual pair, audio-visual segmentation (AVS) aims to locate sounding sources by predicting pixel-wise maps. Previous methods assume that each sound component in an audio signal always has a visual counterpart in the image. However, this assumption overlooks that off-screen sounds and background noise often contaminate audio recordings in real-world scenarios. These contaminating sounds impose significant challenges on building a consistent semantic mapping between audio and visual signals for AVS models and thus impede precise sound localization. In this work, we propose a two-stage bootstrapping audio-visual segmentation framework (BAVS) that incorporates multi-modal foundation knowledge. In a nutshell, BAVS is designed to eliminate the interference of background noise or off-screen sounds in segmentation by establishing audio-visual correspondences in an explicit manner. In the first stage, we employ a segmentation model to localize potential sounding objects from visual data without being affected by contaminated audio signals. Meanwhile, we utilize a foundation audio classification model to discern audio semantics. Because the audio tags provided by the audio foundation model are noisy, associating object masks with audio tags is not trivial. Thus, in the second stage, we develop an audio-visual semantic integration strategy (AVIS) to localize the objects that are genuinely producing sound. Here, we construct an audio-visual tree based on the hierarchical correspondence between sounds and object categories. We then examine the label concurrency between the localized objects and the classified audio tags by tracing the audio-visual tree. With AVIS, we can effectively segment the objects that are actually emitting sound. Extensive experiments demonstrate the superiority of our method on AVS datasets, particularly in scenarios involving background noise. Our project website is https://yenanliu.github.io/AVSS.github.io/.
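The second-stage integration strategy (AVIS) described in the abstract can be read as a label-concurrency check over a hierarchical audio-visual tree. The sketch below is a minimal, hypothetical Python illustration of that idea; the tree contents, function names, and matching rule are assumptions made for illustration and do not reproduce the authors' released implementation.

```python
# Hypothetical sketch of the AVIS label-concurrency check: keep an object
# mask only if its category and one of the classified audio tags meet in
# the audio-visual tree. All category names below are illustrative.

from typing import Dict, List, Optional

# Toy audio-visual tree: each node maps a category to its parent
# (e.g. "acoustic_guitar" -> "guitar" -> "instrument").
PARENT: Dict[str, Optional[str]] = {
    "instrument": None,
    "guitar": "instrument",
    "acoustic_guitar": "guitar",
    "animal": None,
    "dog": "animal",
    "dog_bark": "dog",
}

def ancestors(label: str) -> List[str]:
    """Return the label and all of its ancestors in the tree."""
    chain: List[str] = []
    node: Optional[str] = label
    while node is not None:
        chain.append(node)
        node = PARENT.get(node)
    return chain

def is_sounding(object_label: str, audio_tags: List[str]) -> bool:
    """Keep an object as a sounding source if its label shares a node
    with any audio tag along the tree (label-concurrency check)."""
    obj_chain = set(ancestors(object_label))
    return any(obj_chain & set(ancestors(tag)) for tag in audio_tags)

if __name__ == "__main__":
    # A localized "guitar" mask matches the noisy audio tag "acoustic_guitar",
    # while a background-noise tag such as "wind" finds no visual counterpart.
    print(is_sounding("guitar", ["acoustic_guitar", "wind"]))  # True
    print(is_sounding("dog", ["wind"]))                        # False
```

In the paper, the tree is built from the hierarchical correspondence between sound and object categories and combined with the foundation models' predictions; this toy version only shows the tracing step used to reject off-screen sounds and background noise.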