Keywords
Computer science, Leverage (statistics), Sentiment analysis, Encoder, Macro, Artificial intelligence, Transformer, Natural language processing, Adaptability, Machine learning, Labeled data, Data mining
Authors
Xinhua Zhu, Zhongjie Kuang, Lanfang Zhang
Identifier
DOI: 10.1016/j.ipm.2023.103462
Abstract
Recently, pre-trained language models (PLMs), especially pre-trained bidirectional encoder representations from transformers (BERT), have improved the performance of aspect-based sentiment analysis (ABSA) tasks to some extent. However, due to the imbalance of training data across polarities, PLM-based ABSA methods retain two shortcomings: (1) in small-corpus scenarios with polarized emotions, performance is unbalanced across polarities; and (2) in delicate and obscure scenarios dominated by neutral emotions, PLM-based performance gains are limited. To address these shortcomings, we use BERT as an instance of PLMs and propose a general-purpose prompt model with combined semantic refinement for ABSA. First, we utilize BERT without fine-tuning to automatically induce prompts for various ABSA datasets, enhancing the model's adaptability to different application scenarios. We then leverage multi-prompt learning to propose a data augmentation method that addresses the imbalance of training data across polarities. Moreover, to further deepen the model's understanding and analysis of reviews with prompts, we also propose an improved BERT semantic refinement method that combines global semantic refinement with local semantic extraction. Experiments on five public datasets show that, compared with existing methods, our macro-average F1 improvement is over 10% on polarized small datasets and over 7% on an emotionally delicate and obscure dataset.
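The prompt-based ABSA setup the abstract describes can be sketched with a frozen BERT masked-LM head: a review and its aspect term are wrapped in a cloze template, and the polarity is read off from the probabilities of label words at the [MASK] position. The sketch below is a minimal illustration of this general idea under stated assumptions, not the paper's implementation; the templates, verbalizer words, and score averaging are all hypothetical choices.

```python
# Minimal sketch of prompt-based ABSA with a frozen BERT masked-LM head.
# Templates, verbalizer, and averaging are illustrative assumptions,
# not the method proposed in the paper.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()  # frozen BERT, echoing the "BERT without fine-tuning" setting

# Verbalizer: map each polarity label to a candidate word for the [MASK] slot.
verbalizer = {"positive": "good", "negative": "bad", "neutral": "okay"}

def polarity_scores(review: str, aspect: str, template: str) -> dict:
    """Score each polarity by the masked-LM probability of its label word."""
    prompt = template.format(review=review, aspect=aspect,
                             mask=tokenizer.mask_token)
    inputs = tokenizer(prompt, return_tensors="pt")
    # Locate the [MASK] position in the tokenized prompt.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = logits.softmax(dim=-1)
    return {label: probs[tokenizer.convert_tokens_to_ids(word)].item()
            for label, word in verbalizer.items()}

# Multi-prompt idea: average scores over several templates so each example
# is seen under more surface forms (a rough analogue of multi-prompt learning).
templates = [
    "{review} The {aspect} is {mask}.",
    "{review} Overall, the {aspect} was {mask}.",
]
review = "The food arrived cold, but the staff were friendly."
scores = [polarity_scores(review, "food", t) for t in templates]
avg = {k: sum(s[k] for s in scores) / len(scores) for k in verbalizer}
print(max(avg, key=avg.get), avg)
```

In this sketch, averaging over multiple templates only stabilizes prediction; the paper instead uses multiple prompts to augment the training data of under-represented polarities, which this frozen-model example does not attempt to reproduce.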