Keywords
Computer science; Sentiment analysis; Context; Artificial intelligence; Multimodality; Machine learning; Frame; Consistency; Semantics (computer science); Telecommunications; World Wide Web; Programming language
Authors
Maochun Huang, Chunmei Qing, Junpeng Tan, Xiangmin Xu
Source
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing [Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume & pages: 31: 3468-3477
Cited by: 4
Identifier
DOI: 10.1109/taslp.2023.3321971
Abstract
Recently, video sentiment computing has become a research focus because of its benefits in many applications such as digital marketing, education, and healthcare. The difficulty of video sentiment prediction mainly lies in the regression accuracy over long-term sequences and in how to integrate different modalities; in particular, different modalities may express different emotions. To maintain the continuity of long time-series sentiments and mitigate multimodal conflicts, this paper proposes a novel Context-Based Adaptive Multimodal Fusion Network (CAMFNet) for consecutive frame-level sentiment prediction. A Context-Based Transformer (CBT) module is specifically designed to embed clip features into continuous frame features, leveraging its capability to enhance the consistency of prediction results. Moreover, to resolve conflicts between modalities, this paper proposes an Adaptive Multimodal Fusion (AMF) method based on the self-attention mechanism. It dynamically determines the degree of shared semantics across modalities, enabling the model to flexibly adapt its fusion strategy. Through adaptive fusion of multimodal features, the AMF method effectively resolves potential conflicts arising from diverse modalities, ultimately enhancing the overall performance of the model. The proposed CAMFNet for consecutive frame-level sentiment prediction ensures the continuity of long time-series sentiments. Extensive experiments illustrate the superiority of the proposed method, especially on videos with multimodal conflicts.
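The abstract describes the AMF idea at a high level: self-attention over per-modality features, with the attention weights acting as a learned "degree of shared semantics" that down-weights conflicting modalities before fusion. The paper's actual architecture is not reproduced here; the following is only a minimal NumPy sketch of that general mechanism, with hypothetical random projection matrices standing in for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_multimodal_fusion(modal_feats, rng=None):
    """Toy self-attention fusion over modality features (illustrative only).

    modal_feats: (M, d) array, one d-dimensional feature per modality
                 (e.g. visual, audio, text) for a single frame.
    Returns one fused d-dimensional vector. The (M, M) attention map plays
    the role of cross-modal "shared semantics" weights: a modality whose
    features agree with the others attends to them more strongly, which
    softens the influence of a conflicting modality.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    M, d = modal_feats.shape
    # Hypothetical learned query/key/value projections (random stand-ins).
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = modal_feats @ Wq, modal_feats @ Wk, modal_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (M, M) cross-modal weights
    fused = attn @ V                               # attended per-modality features
    return fused.mean(axis=0)                      # pooled fused representation

# Usage: three modalities with 8-dim features for one frame.
feats = np.random.default_rng(1).standard_normal((3, 8))
fused = adaptive_multimodal_fusion(feats)
print(fused.shape)  # (8,)
```

In the paper this fusion is applied per frame and combined with the CBT module so that clip-level context keeps consecutive frame predictions consistent; the sketch above covers only the fusion step.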