TDFNet: Transformer-Based Deep-Scale Fusion Network for Multimodal Emotion Recognition

Computer Science · Deep Learning · Artificial Intelligence · Transformer · Multimodal Learning · Affective Computing · Emotion Recognition · Deep Belief Network · Feature Learning · Machine Learning · Engineering · Electrical Engineering · Voltage
Authors
Zhengdao Zhao, Yuhua Wang, Guangze Shen, Yuezhu Xu, Jiayuan Zhang
Source
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing [Institute of Electrical and Electronics Engineers]
Volume: 31, pp. 3771-3782 · Cited by: 22
Identifier
DOI: 10.1109/taslp.2023.3316458
Abstract

As deep learning research continues to progress, artificial intelligence technology is gradually empowering various fields. To achieve a more natural human-computer interaction experience, accurately recognizing the emotional state of speech interactions has become a new research hotspot. Sequence modeling methods based on deep learning techniques have advanced emotion recognition, but mainstream methods still suffer from insufficient multimodal information interaction, difficulty in learning emotion-related features, and low recognition accuracy. In this paper, we propose a transformer-based deep-scale fusion network (TDFNet) for multimodal emotion recognition to address these problems. The multimodal embedding (ME) module in TDFNet uses pretrained models to alleviate the data scarcity problem by providing the model with prior knowledge of multimodal information drawn from a large amount of unlabeled data. In addition, a mutual transformer (MT) module is introduced to learn multimodal emotional commonality and speaker-related emotional features, improving contextual emotional semantic understanding. Furthermore, we design a novel emotion feature learning method named the deep-scale transformer (DST), which further improves emotion recognition by aligning multimodal features and learning multiscale emotion features through GRUs with shared weights. To comparatively evaluate the performance of TDFNet, experiments are conducted on the IEMOCAP corpus under three reasonable data splitting strategies. The experimental results show that TDFNet achieves 82.08% WA and 82.57% UA under the RA data splitting strategy, improvements of 1.78% WA and 1.17% UA over the previous state-of-the-art method, respectively. Benefiting from the attentively aligned mutual correlations and fine-grained emotion-related features, TDFNet achieves significant improvements in multimodal emotion recognition.
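The abstract outlines a three-stage pipeline: a multimodal embedding (ME) stage built on pretrained encoders, a mutual transformer (MT) for cross-modal interaction, and a deep-scale transformer (DST) that learns multiscale emotion features with shared-weight GRUs. The PyTorch sketch below is a minimal, hypothetical reading of that pipeline, not the authors' published implementation: the use of nn.MultiheadAttention for the mutual attention, average pooling to form the temporal scales, a single shared GRU, and all dimensions and class counts are illustrative assumptions. Pretrained audio/text embeddings are assumed to be computed beforehand.

```python
import torch
import torch.nn as nn


class MutualTransformer(nn.Module):
    """Cross-modal attention: each modality attends to the other.
    Sketch of the MT idea; layer choices are assumptions."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.a2t = nn.MultiheadAttention(dim, heads, batch_first=True)  # audio queries, text keys/values
        self.t2a = nn.MultiheadAttention(dim, heads, batch_first=True)  # text queries, audio keys/values

    def forward(self, audio, text):
        audio_out, _ = self.a2t(audio, text, text)
        text_out, _ = self.t2a(text, audio, audio)
        return audio_out, text_out


class DeepScaleBlock(nn.Module):
    """Multiscale feature learning with one GRU whose weights are shared
    across temporal scales (loosely following the DST description);
    the pooling scales are illustrative."""
    def __init__(self, dim: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.gru = nn.GRU(dim, dim, batch_first=True)  # shared across all scales

    def forward(self, x):  # x: (batch, time, dim)
        feats = []
        for s in self.scales:
            # Downsample the time axis by factor s, then summarize with the shared GRU.
            xs = nn.functional.avg_pool1d(x.transpose(1, 2), kernel_size=s, stride=s).transpose(1, 2)
            _, h = self.gru(xs)           # final hidden state per scale: (1, batch, dim)
            feats.append(h.squeeze(0))
        return torch.cat(feats, dim=-1)   # concatenated multiscale summary


class TDFNetSketch(nn.Module):
    """Fusion and classification stages only; pretrained ME features
    (e.g. wav2vec-style audio, BERT-style text) are assumed as inputs."""
    def __init__(self, dim: int = 256, num_classes: int = 4, scales=(1, 2, 4)):
        super().__init__()
        self.mt = MutualTransformer(dim)
        self.dst = DeepScaleBlock(dim, scales)
        self.classifier = nn.Linear(2 * dim * len(scales), num_classes)

    def forward(self, audio_emb, text_emb):
        a, t = self.mt(audio_emb, text_emb)
        fused = torch.cat([self.dst(a), self.dst(t)], dim=-1)
        return self.classifier(fused)


if __name__ == "__main__":
    # Toy shapes: batch of 2, 100 audio frames / 40 text tokens, 256-d features.
    model = TDFNetSketch()
    audio = torch.randn(2, 100, 256)
    text = torch.randn(2, 40, 256)
    print(model(audio, text).shape)  # torch.Size([2, 4])
```

Sharing one GRU across all temporal scales, as in the sketch, keeps the parameter count independent of the number of scales and forces the recurrent dynamics to generalize across time resolutions, which is one plausible reading of the "GRUs with shared weights" phrase in the abstract.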