Keywords
Computer science
Sentiment analysis
Modality (human–computer interaction)
Artificial intelligence
Natural language processing
Deep learning
Discriminative model
Feature learning
Social media
World Wide Web
Authors
Sun Zhang, Chunyong Yin, Zhichao Yin
Source
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
[Institute of Electrical and Electronics Engineers]
Date: 2022-12-01
Volume/issue: 7 (1): 200-209
Citations: 13
Identifier
DOI: 10.1109/tetci.2022.3224929
Abstract
Sentiment recognition in social networks aims to identify the underlying affective states of user-generated content. With the explosive growth of social platforms, the focus of sentiment-recognition research is shifting from pure text to multimodal content. Unlike the image-text posts found on blog and review platforms, multimodal sequences dominate streaming media such as YouTube and TikTok. Sentiment recognition for multimodal sequences must extract both the information shared across modalities and the information specific to each one. However, current studies focus only on learning a cross-modal fusion representation to model inter-modal interactions, neglecting the interactions and characteristics within each modality. We propose a novel cascade and specific scoring model that learns better cross-modal and unimodal representations, capturing both inter- and intra-modal interactions for sentiment recognition. Qualitative and quantitative experiments on two benchmarks demonstrate the competitive performance of the proposed method.
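The distinction the abstract draws — scoring a fused cross-modal representation (inter-modal interaction) while also scoring each modality on its own (intra-modal characteristics) — can be sketched as below. This is an illustrative toy in NumPy, not the authors' cascade and specific scoring architecture; the feature dimensions, the random stand-in weights, concatenation as the fusion step, and the averaging of scores are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors for one video clip
# (dimensions chosen arbitrarily for illustration).
text_feat = rng.standard_normal(16)    # e.g. sentence embedding
audio_feat = rng.standard_normal(16)   # e.g. prosody features
video_feat = rng.standard_normal(16)   # e.g. facial-expression features

def unimodal_score(feat, w):
    """Modality-specific sentiment score from an intra-modal representation."""
    return float(np.tanh(feat @ w))

def fused_score(feats, w):
    """Cross-modal sentiment score from a fused representation.

    Concatenation is the simplest possible inter-modal fusion and
    stands in here for a learned fusion module.
    """
    return float(np.tanh(np.concatenate(feats) @ w))

# Randomly initialized parameters stand in for learned weights.
w_t, w_a, w_v = (rng.standard_normal(16) for _ in range(3))
w_f = rng.standard_normal(48)

scores = [
    unimodal_score(text_feat, w_t),
    unimodal_score(audio_feat, w_a),
    unimodal_score(video_feat, w_v),
    fused_score([text_feat, audio_feat, video_feat], w_f),
]
# The final prediction combines unimodal and cross-modal evidence,
# so neither source of interaction is discarded.
sentiment = sum(scores) / len(scores)
print(f"sentiment score in [-1, 1]: {sentiment:.3f}")
```

Because each score passes through `tanh`, every component and the final average lie in [-1, 1]; in a trained model the weights would be learned jointly so that the fused branch captures shared information while the unimodal branches preserve modality-specific cues.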