Attention-based multimodal sentiment analysis and emotion recognition using deep neural networks

Computer Science, Discriminative Model, Mode, Sentiment Analysis, Artificial Intelligence, Modality (Human–Computer Interaction), Feature (Linguistics), Visualization, Feature Extraction, Machine Learning, Deep Learning, Pattern Recognition (Psychology), Social Science, Linguistics, Philosophy, Sociology
Authors
Ajwa Aslam,Allah Bux Sargano,Zulfiqar Habib
Source
Journal: Applied Soft Computing [Elsevier]
Volume 144: 110494–110494 · Cited by: 18
Identifier
DOI:10.1016/j.asoc.2023.110494
Abstract

There has been a growing interest in multimodal sentiment analysis and emotion recognition in recent years due to its wide range of practical applications. Multiple modalities allow for the integration of complementary information, improving the accuracy and precision of sentiment and emotion recognition tasks. However, working with multiple modalities presents several challenges, including handling data source heterogeneity, fusing information, aligning and synchronizing modalities, and designing effective feature extraction techniques that capture discriminative information from each modality. This paper introduces a novel framework called "Attention-based Multimodal Sentiment Analysis and Emotion Recognition (AMSAER)" to address these challenges. This framework leverages intra-modality discriminative features and inter-modality correlations in visual, audio, and textual modalities. It incorporates an attention mechanism to facilitate sentiment and emotion classification based on visual, textual, and acoustic inputs by emphasizing relevant aspects of the task. The proposed approach employs separate models for each modality to automatically extract discriminative semantic words, image regions, and audio features. A deep hierarchical model is then developed, incorporating intermediate fusion to learn hierarchical correlations between the modalities at bimodal and trimodal levels. Finally, the framework combines four distinct models through decision-level fusion to enable multimodal sentiment analysis and emotion recognition. The effectiveness of the proposed framework is demonstrated through extensive experiments conducted on the publicly available Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset. The results confirm a notable performance improvement compared to state-of-the-art methods, attaining 85% and 93% accuracy for sentiment analysis and emotion classification, respectively. Additionally, when considering class-wise accuracy, the results indicate that the "angry" emotion and "positive" sentiment are classified more effectively than the other emotions and sentiments, achieving 96.80% and 93.14% accuracy, respectively.
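
The abstract outlines a three-stage pipeline: per-modality encoders with attention, intermediate (bimodal/trimodal) fusion, and decision-level fusion across four models. Below is a minimal PyTorch sketch of that general shape. All module names (AttentivePooling, ModalityEncoder, TrimodalFusion), feature dimensions, and the probability-averaging late-fusion rule are illustrative assumptions, not the AMSAER authors' implementation.

```python
# Illustrative sketch only; layer sizes, attention pooling, and fusion
# choices are assumptions, not the AMSAER authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentivePooling(nn.Module):
    """Scores each time step and returns an attention-weighted summary."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                            # x: (batch, seq, dim)
        weights = F.softmax(self.score(x), dim=1)    # (batch, seq, 1)
        return (weights * x).sum(dim=1)              # (batch, dim)

class ModalityEncoder(nn.Module):
    """Per-modality encoder: project features, then attention-pool."""
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.pool = AttentivePooling(hidden)

    def forward(self, x):                            # x: (batch, seq, in_dim)
        return self.pool(self.proj(x))               # (batch, hidden)

class TrimodalFusion(nn.Module):
    """Intermediate fusion: concatenate unimodal summaries, classify."""
    def __init__(self, hidden=128, n_classes=4):
        super().__init__()
        self.text = ModalityEncoder(300, hidden)     # e.g. word embeddings
        self.audio = ModalityEncoder(74, hidden)     # e.g. acoustic features
        self.video = ModalityEncoder(512, hidden)    # e.g. CNN region features
        self.head = nn.Linear(3 * hidden, n_classes)

    def forward(self, t, a, v):
        z = torch.cat([self.text(t), self.audio(a), self.video(v)], dim=-1)
        return self.head(z)                          # emotion-class logits

def decision_level_fusion(logit_list):
    """Average class probabilities from several models (late fusion)."""
    probs = [F.softmax(l, dim=-1) for l in logit_list]
    return torch.stack(probs).mean(dim=0)

if __name__ == "__main__":
    model = TrimodalFusion()
    t = torch.randn(2, 20, 300)   # 20 text tokens
    a = torch.randn(2, 50, 74)    # 50 audio frames
    v = torch.randn(2, 16, 512)   # 16 visual regions
    logits = model(t, a, v)
    # The paper fuses four distinct models; a repeated logit tensor
    # stands in here just to exercise the late-fusion step.
    fused = decision_level_fusion([logits, logits])
    print(fused.shape)            # torch.Size([2, 4])
```

In the paper's scheme the decision-level stage combines four separate models rather than copies of one; the sketch simply shows how attention pooling, concatenation-based intermediate fusion, and probability averaging compose.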