Joint (building)
Computer science
Noise reduction
Artificial intelligence
Pattern recognition (psychology)
Machine learning
Engineering
Architectural engineering
Authors
Xun Jiang, Xing Xu, Huimin Lu, Lianghua He, Heng Tao Shen
Source
Journal: IEEE Transactions on Fuzzy Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Pages: 1-14
Cited by: 2
Identifiers
DOI:10.1109/tfuzz.2024.3405541
Abstract
Multimodal Sentiment Analysis (MSA) aims to teach computers or robots to understand human sentiment from diverse multimodal signals, including audio, vision, and text. Current MSA approaches primarily concentrate on devising fusion strategies for multimodal signals and on learning better multimodal joint representations. However, employing multimodal signals directly is not appropriate, since human psychological states are fuzzy and cannot be categorized easily, which undermines the effectiveness of existing methods. In this paper, we regard the natural fuzziness of human sentiment as consisting of two types: objective fuzziness introduced by human expression and subjective fuzziness caused by the complexity of human affection. Based on this assumption, we propose a novel method termed Joint Objective and Subjective Fuzziness Denoising (JOSFD), which introduces fuzzy logic into the multimodal fusion process and the sentiment decision process to overcome the objective and subjective fuzziness. Specifically, our JOSFD method contains two key modules: (1) a Modality-Specific Fuzzification Module that leverages uncertainty estimation and fuzzy logic to overcome the influence of objective fuzziness in different modalities during multimodal fusion, and (2) Attitude-Intensity Representation Disentangling, which learns joint representations for human attitude and sentiment strength separately and further employs fuzzy logic to decide the sentiment analysis results. We evaluate our proposed JOSFD method on three widely used MSA benchmark datasets: CMU-MOSI, CMU-MOSEI, and CH-SIMS. Extensive experiments demonstrate that our JOSFD method outperforms recent state-of-the-art methods.
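The abstract only outlines the two modules at a high level; the minimal PyTorch sketch below illustrates one plausible way uncertainty-weighted fuzzification and attitude-intensity disentangling could fit together. The module internals (a log-variance uncertainty head, an exponential membership function, product-style defuzzification) and the feature dimensions are assumptions for illustration only, not the paper's actual implementation.

# Hypothetical sketch of the two JOSFD modules described in the abstract.
# All class names, layer sizes, and the fuzzy membership function are
# assumptions; the published method may differ substantially.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalitySpecificFuzzification(nn.Module):
    """Estimates per-modality uncertainty and turns it into a fuzzy weight."""

    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)
        # Predicts a log-variance as a simple uncertainty estimate (assumption).
        self.log_var_head = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor):
        h = torch.tanh(self.encoder(x))
        log_var = self.log_var_head(h)
        # Fuzzy membership: low-uncertainty modalities get a weight near 1.
        membership = torch.exp(-log_var.clamp(min=0.0))
        return h * membership, membership


class AttitudeIntensityDisentangler(nn.Module):
    """Separates fused features into attitude (polarity) and intensity (strength)."""

    def __init__(self, in_dim: int):
        super().__init__()
        self.attitude_head = nn.Linear(in_dim, 1)   # sign of sentiment
        self.intensity_head = nn.Linear(in_dim, 1)  # magnitude of sentiment

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        attitude = torch.tanh(self.attitude_head(fused))    # in [-1, 1]
        intensity = F.softplus(self.intensity_head(fused))  # >= 0
        # Crisp sentiment score via a product-style defuzzification (assumption).
        return attitude * intensity


if __name__ == "__main__":
    # Feature sizes roughly in the range of common MSA benchmarks (assumption).
    dims = {"audio": 74, "vision": 35, "text": 768}
    fuzzifiers = nn.ModuleDict({m: ModalitySpecificFuzzification(d) for m, d in dims.items()})
    decider = AttitudeIntensityDisentangler(in_dim=64 * len(dims))

    feats = {m: torch.randn(8, d) for m, d in dims.items()}  # batch of 8 samples
    weighted = [fuzzifiers[m](x)[0] for m, x in feats.items()]
    fused = torch.cat(weighted, dim=-1)
    print(decider(fused).shape)  # torch.Size([8, 1])

In this sketch the fuzzification step downweights modalities whose features look uncertain before fusion (the "objective fuzziness"), while the disentangler keeps polarity and strength separate until the final decision (the "subjective fuzziness"), mirroring the two-part structure the abstract describes.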