A Multimodal Saliency Model for Videos With High Audio-Visual Correspondence

Keywords: Computer science, Artificial intelligence, Audio-visual, Kadir–Brady saliency detector, Computer vision, Saliency map, Visualization, Modality (human–computer interaction), Pattern recognition (psychology), Speech recognition, Image (mathematics), Multimedia
Authors
Xiongkuo Min, Guangtao Zhai, Jiantao Zhou, Xiao-Ping Zhang, Xiaokang Yang, Xinping Guan
Source
Journal: IEEE Transactions on Image Processing (Institute of Electrical and Electronics Engineers)
Volume 29, pp. 3805-3819 · Cited by 179
Identifier
DOI: 10.1109/TIP.2020.2966082
Abstract

Audio information has been overlooked by most current visual attention prediction studies. However, sound can influence visual attention, and this influence has been widely investigated and confirmed by many psychological studies. In this paper, we propose a novel multi-modal saliency (MMS) model for videos containing scenes with high audio-visual correspondence. In such scenes, humans tend to be attracted by sound sources, and it is also possible to localize these sound sources via cross-modal analysis. Specifically, we first detect spatial and temporal saliency maps from the visual modality using a novel free-energy principle. We then detect an audio saliency map from both the audio and visual modalities by localizing moving-sounding objects with cross-modal kernel canonical correlation analysis, which is the first approach of its kind in the literature. Finally, we propose a new two-stage adaptive audio-visual saliency fusion method that integrates the spatial, temporal, and audio saliency maps into the final audio-visual saliency map. The proposed MMS model captures the influence of audio, which is not considered in the latest deep learning based saliency models. To take advantage of both deep saliency modeling and audio-visual saliency modeling, we combine deep saliency models with the MMS model via late fusion and find that an average performance gain of 5% is obtained. Experimental results on audio-visual attention databases show that the introduced models incorporating audio cues significantly outperform state-of-the-art image and video saliency models that rely on the visual modality alone.
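The abstract describes a three-branch pipeline (spatial, temporal, and audio saliency) followed by an adaptive audio-visual fusion and a late fusion with deep saliency models. The sketch below illustrates those fusion steps under stated assumptions only; it is not the authors' implementation. Linear CCA from scikit-learn stands in for the paper's kernel canonical correlation analysis, fixed weights stand in for the two-stage adaptive fusion, and all function names (normalize_map, audio_visual_correlation, fuse_mms, late_fuse) are illustrative.

```python
# Minimal sketch of the fusion steps outlined in the abstract.
# Assumptions: linear CCA replaces kernel CCA; fixed fusion weights
# replace the paper's two-stage adaptive fusion; all names are illustrative.

import numpy as np
from sklearn.cross_decomposition import CCA


def normalize_map(s):
    """Min-max normalize a saliency map to [0, 1]."""
    s = s.astype(np.float64)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)


def audio_visual_correlation(visual_feats, audio_feats):
    """Score audio-visual correspondence of a candidate region with
    linear CCA (a stand-in for the paper's kernel CCA).

    visual_feats: (n_frames, d_v) per-frame visual features, e.g. pooled
                  motion descriptors of the region.
    audio_feats:  (n_frames, d_a) per-frame audio features, e.g. MFCCs.
    Returns the correlation of the first pair of canonical variates.
    """
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(visual_feats, audio_feats)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])


def fuse_mms(spatial, temporal, audio, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine spatial, temporal, and audio saliency maps.
    The paper uses a two-stage adaptive fusion; the fixed weights here
    are only a placeholder."""
    maps = [normalize_map(m) for m in (spatial, temporal, audio)]
    return normalize_map(sum(w * m for w, m in zip(weights, maps)))


def late_fuse(mms_map, deep_map):
    """Late fusion of the MMS map with a deep saliency model's output,
    shown here as a simple average of normalized maps (an assumption)."""
    return 0.5 * normalize_map(mms_map) + 0.5 * normalize_map(deep_map)


if __name__ == "__main__":
    h, w, t = 36, 64, 50
    rng = np.random.default_rng(0)
    spatial, temporal, audio = (rng.random((h, w)) for _ in range(3))
    deep = rng.random((h, w))  # placeholder for a deep model's saliency map

    corr = audio_visual_correlation(rng.random((t, 8)), rng.random((t, 13)))
    final = late_fuse(fuse_mms(spatial, temporal, audio), deep)
    print(f"AV correlation: {corr:.3f}, fused map shape: {final.shape}")
```

In this toy usage, the audio-visual correlation score would be computed per candidate moving region and used to weight its contribution to the audio saliency map; the exact weighting scheme and the two fusion stages follow the paper, which the sketch does not reproduce.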
