Keywords
Sentiment analysis
Computer science
Originality
Convolutional neural network
Deep learning
Artificial intelligence
Sample (material)
Baseline
Value (mathematics)
Social media
Natural language processing
Machine learning
World Wide Web
Qualitative research
Chemistry
Sociology
Geology
Oceanography
Chromatography
Social science
Author(s)
Wei Shi, Jing Zhang, Shaoyi He
Source
Journal: Kybernetes [Emerald (MCB UP)]
Date: 2023-09-12
Cited by: 1
Identifier
DOI: 10.1108/k-04-2023-0723
Abstract
Purpose: With the rapid development of short videos in China, the public has become accustomed to using them to express opinions. This paper aims to address how to represent the features of different modalities and how to achieve effective cross-modal feature fusion when analyzing the multimodal sentiment of Chinese short videos (CSVs).
Design/methodology/approach: This paper proposes a sentiment analysis model, MSCNN-CPL-CAFF, which uses a multi-scale convolutional neural network and a cross-attention fusion mechanism to analyze CSVs. Audio-visual and textual data from CSVs themed on "COVID-19, catering industry" are first collected from the CSV platform Douyin, and a comparative analysis is then conducted against advanced baseline models.
Findings: Weakly negative and neutral sentiment accounts for the largest number of samples, while positive and weakly positive sentiment is relatively rare, making up only about 11% of all samples. The MSCNN-CPL-CAFF model achieves Acc-2, Acc-3 and F1 scores of 85.01%, 74.16% and 84.84%, respectively, outperforming the best of the baseline methods in accuracy while achieving competitive computation speed.
Practical implications: This research offers insights into the impact of COVID-19 on the catering industry in China by focusing on the multimodal sentiment of CSVs. The methodology can be used to analyze and categorize public opinion on social media platforms.
Originality/value: This paper presents a novel deep-learning multimodal sentiment analysis model, which provides a new perspective for public opinion research on short video platforms.
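The abstract only names the building blocks of MSCNN-CPL-CAFF (a multi-scale convolutional encoder per modality and cross-attention fusion across modalities), not their implementation. Below is a minimal PyTorch sketch of how such components could be wired together for three-class sentiment classification; the class names (MultiScaleCNN, CrossAttentionFusion), kernel sizes, layer dimensions and pooling choices are illustrative assumptions, not the authors' published architecture, and the CPL component is not reconstructed here.

```python
# Hypothetical sketch of a multi-scale CNN encoder plus cross-attention fusion.
# All sizes and names are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn


class MultiScaleCNN(nn.Module):
    """Extracts features with parallel 1D convolutions of several kernel sizes."""

    def __init__(self, in_dim: int, out_dim: int, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(in_dim, out_dim, k, padding=k // 2) for k in kernel_sizes
        )
        self.proj = nn.Linear(out_dim * len(kernel_sizes), out_dim)

    def forward(self, x):  # x: (batch, seq_len, in_dim)
        x = x.transpose(1, 2)                        # (batch, in_dim, seq_len)
        feats = [torch.relu(conv(x)) for conv in self.convs]
        # Trim to a common length: even kernels pad one extra step.
        min_len = min(f.size(-1) for f in feats)
        feats = torch.cat([f[..., :min_len] for f in feats], dim=1)
        return self.proj(feats.transpose(1, 2))      # (batch, min_len, out_dim)


class CrossAttentionFusion(nn.Module):
    """Lets the textual stream attend to audio-visual features and vice versa."""

    def __init__(self, dim: int, num_heads: int = 4, num_classes: int = 3):
        super().__init__()
        self.text_to_av = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.av_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, text_feat, av_feat):
        t, _ = self.text_to_av(text_feat, av_feat, av_feat)    # text queries A/V
        a, _ = self.av_to_text(av_feat, text_feat, text_feat)  # A/V queries text
        pooled = torch.cat([t.mean(dim=1), a.mean(dim=1)], dim=-1)
        return self.classifier(pooled)                # (batch, num_classes)


if __name__ == "__main__":
    text = torch.randn(8, 50, 300)   # e.g. 50 word embeddings of dimension 300
    av = torch.randn(8, 120, 128)    # e.g. 120 audio-visual frame features
    text_enc = MultiScaleCNN(300, 128)
    av_enc = MultiScaleCNN(128, 128)
    fusion = CrossAttentionFusion(128, num_classes=3)
    logits = fusion(text_enc(text), av_enc(av))
    print(logits.shape)              # torch.Size([8, 3])
```

Bidirectional cross-attention, as sketched here, is one common way to realize cross-modal feature fusion: each modality's features are re-weighted by their relevance to the other modality before pooling and classification.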