Multi-space channel representation learning for mono-to-binaural conversion based audio deepfake detection

Authors
Rui Liu, Jinhua Zhang, Guanglai Gao
Source
Journal: Information Fusion [Elsevier BV]
Volume 105, Article 102257. Cited by: 2
Identifier
DOI: 10.1016/j.inffus.2024.102257
Abstract

Audio deepfake detection (ADD), an emerging topic, aims to detect fake audio generated by text-to-speech (TTS), voice conversion (VC), and similar systems. Traditionally, the mono signal is read and its artifacts are analyzed directly. Recently, the mono-to-binaural conversion based ADD approach has attracted increasing attention, since binaural audio signals provide a unique and comprehensive perspective on speech perception. Such methods first convert the mono audio into binaural audio, then process the left and right channels separately to discover authenticity cues. However, the acoustic information in the two channels exhibits both differences and similarities, which previous research has not thoroughly explored. To address this issue, we propose a new mono-to-binaural conversion based ADD framework that performs multi-space channel representation learning, termed "MSCR-ADD". Specifically, (1) the feature representations of the respective channels are learned by the channel-specific encoder and stored in the channel-specific space; (2) the feature representations capturing the difference between the two channels are learned by the channel-differential encoder and stored in the channel-differential space; (3) the channel-invariant encoder then learns channel-commonality representations in the channel-invariant space. We propose orthogonality and mutual information maximization losses to constrain the channel-specific and channel-invariant encoders. Finally, the three representations from the different spaces are fused to finalize the deepfake detection. Notably, the feature representations in the channel-differential and channel-invariant spaces unveil the differences and similarities between the two channels in binaural audio, enabling us to effectively detect artifacts in fake audio. Experimental results on four benchmark datasets demonstrate that MSCR-ADD is superior to existing state-of-the-art approaches.
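The abstract's three-space design can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the embeddings are random stand-ins for encoder outputs, the channel-differential feature is taken as a simple signed difference, the channel-invariant feature as a channel average, and the orthogonality constraint is written as the common squared-Frobenius-norm penalty on the cross-correlation of two representation spaces. All names (`h_spec`, `h_diff`, `h_inv`, `orthogonality_loss`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-channel embeddings (batch x dim); in the paper these
# would come from encoders applied to the left/right binaural channels.
h_left = rng.standard_normal((4, 8))
h_right = rng.standard_normal((4, 8))

# Channel-specific representation: stand-in for the channel-specific encoder.
h_spec = rng.standard_normal((4, 8))

# Channel-differential space: one simple reading is the signed difference
# between the two channel embeddings.
h_diff = h_left - h_right

# Channel-invariant space: a naive stand-in is the cross-channel average.
h_inv = 0.5 * (h_left + h_right)

def orthogonality_loss(a, b):
    """Squared Frobenius norm of A^T B: zero when the two representation
    spaces are orthogonal -- a common soft disentanglement constraint."""
    return float(np.sum((a.T @ b) ** 2))

# Penalty pushing channel-specific and channel-invariant spaces apart.
loss_orth = orthogonality_loss(h_spec, h_inv)

# Fusion for the final detector: concatenate the three spaces.
h_fused = np.concatenate([h_spec, h_diff, h_inv], axis=-1)  # shape (4, 24)
```

In training, `loss_orth` would be added to the detection loss alongside a mutual-information term, and `h_fused` fed to a classification head.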