Multimodal Fusion for Talking Face Generation Utilizing Speech-related Facial Action Units

Authors
Zhilei Liu,Xiaoxing Liu,Sen Chen,Jiaxing Liu,Longbiao Wang,Chongke Bi
Source
Journal: ACM Transactions on Multimedia Computing, Communications, and Applications [Association for Computing Machinery]
Identifier
DOI:10.1145/3672565
Abstract

Talking face generation synthesizes a lip-synchronized talking face video from an arbitrary face image and corresponding audio clips. Current talking face models can be divided into four parts: visual feature extraction, audio feature processing, multimodal feature fusion, and a rendering module. In the visual feature extraction part, existing methods face the challenge of a complex learning task with noisy features; this paper introduces an attention-based disentanglement module that disentangles the face into an Audio-face and an Identity-face using speech-related facial action unit (AU) information. In the multimodal feature fusion part, existing methods ignore both the interaction and relationship of cross-modal information and the local driving information of the mouth muscles. This study proposes a novel generative framework that incorporates a dilated non-causal temporal convolutional self-attention network as a multimodal fusion module to enhance the learning of cross-modal features. The proposed method employs both audio and speech-related AUs as driving information; speech-related AU information can facilitate more accurate mouth movements. Given the high correlation between speech and speech-related AUs, we propose an audio-to-AU module to predict speech-related AU information. Finally, we present a diffusion model for the synthesis of talking face images. We verify the effectiveness of the proposed model on the GRID and TCD-TIMIT datasets, and an ablation study confirms the contribution of each component. The results of quantitative and qualitative experiments demonstrate that our method outperforms existing methods in terms of both image quality and lip-sync accuracy. Code is available at https://mftfg-au.github.io/Multimodal_Fusion/.
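The abstract describes the fusion module as a dilated non-causal temporal convolutional network combined with self-attention over the concatenated audio, AU, and visual features. The paper's actual layer sizes, projections, and training details are not given here, so the following is only an illustrative NumPy sketch of that general mechanism; all function names, shapes, and the single-head identity-projection attention are assumptions, not the authors' implementation.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Non-causal dilated 1D convolution over time.
    x: (T, C_in) feature sequence; w: (K, C_in, C_out) kernel.
    Symmetric zero-padding keeps the output length T; "non-causal"
    means each frame attends to both past and future frames."""
    K, C_in, C_out = w.shape
    pad = dilation * (K - 1) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    T = x.shape[0]
    out = np.zeros((T, C_out))
    for t in range(T):
        for k in range(K):
            out[t] += xp[t + k * dilation] @ w[k]
    return out

def self_attention(x):
    """Single-head scaled dot-product self-attention over the time axis
    (identity Q/K/V projections, for illustration only)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ x

def fuse(audio_feat, au_feat, visual_feat, w):
    """Hypothetical fusion step: concatenate the per-frame modality
    features, run a dilated non-causal temporal conv, then self-attention."""
    x = np.concatenate([audio_feat, au_feat, visual_feat], axis=-1)
    h = dilated_conv1d(x, w, dilation=2)
    return self_attention(h)

# Toy shapes: 8 frames, 4-d audio, 2-d AU, 4-d visual features.
rng = np.random.default_rng(0)
audio = rng.standard_normal((8, 4))
au = rng.standard_normal((8, 2))
vis = rng.standard_normal((8, 4))
w = rng.standard_normal((3, 10, 6)) * 0.1  # kernel size 3, 10 -> 6 channels
fused = fuse(audio, au, vis, w)
print(fused.shape)  # (8, 6)
```

A stack of such layers with increasing dilation rates would widen the temporal receptive field exponentially, which is the usual motivation for dilated temporal convolutions in audio-driven models.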
