Computer science
Generality
Task (project management)
Simplicity (philosophy)
Domain (mathematical analysis)
Sentiment analysis
Transfer learning
Domain (mathematics)
Artificial intelligence
Shot (pellet)
Natural language processing
Machine learning
Psychology
Mathematical analysis
Philosophy
Chemistry
Mathematics
Management
Organic chemistry
Epistemology
Pure mathematics
Economics
Psychotherapist
Authors
Zengzhi Wang, Qiming Xie, Rui Xia
Identifier
DOI: 10.1145/3539618.3591940
Abstract
The pre-training and fine-tuning paradigm has become the mainstream framework in the field of Aspect-Based Sentiment Analysis (ABSA). Although it has achieved sound performance in domains containing enough fine-grained aspect-sentiment annotations, it remains challenging to conduct few-shot ABSA in domains where manual annotations are scarce. In this work, we argue that two kinds of gaps, i.e., the domain gap and the objective gap, hinder the transfer of knowledge from pre-trained language models (PLMs) to ABSA tasks. To address this issue, we introduce a simple yet effective framework called FS-ABSA, which involves domain-adaptive pre-training and text-infilling fine-tuning. We approach the End-to-End ABSA task as a text-infilling problem and perform domain-adaptive pre-training with the text-infilling objective, narrowing the two gaps and consequently facilitating knowledge transfer. Experiments show that the resulting model achieves more compelling performance than baselines under the few-shot setting, while driving the state-of-the-art performance to a new level across datasets under the fully-supervised setting. Moreover, we apply our framework to two non-English low-resource languages to demonstrate its generality and effectiveness.
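As a rough illustration of the two-stage recipe the abstract describes, the sketch below uses a T5-style model from Hugging Face transformers: both domain-adaptive pre-training and few-shot fine-tuning reduce to the same text-infilling (span-corruption) objective. The prompt template, label format, and checkpoint are illustrative assumptions, not the authors' released FS-ABSA implementation.

# Minimal sketch, assuming a T5-style backbone; the template and label
# format are hypothetical, not the paper's released code.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")  # illustrative checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def infilling_loss(source: str, target: str):
    """One forward pass in T5's sentinel-token text-infilling format."""
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    return model(input_ids=inputs.input_ids,
                 attention_mask=inputs.attention_mask,
                 labels=labels).loss

# Stage 1: domain-adaptive pre-training on unlabeled in-domain text.
# Random spans are replaced by sentinel tokens and the model learns to
# reconstruct them (standard span corruption).
pt_loss = infilling_loss(
    "The <extra_id_0> was friendly and the food arrived <extra_id_1>.",
    "<extra_id_0> waiter <extra_id_1> quickly <extra_id_2>",
)

# Stage 2: few-shot fine-tuning, with End-to-End ABSA cast as the same
# infilling problem: the model fills the sentinel span with
# (aspect, sentiment) pairs.
ft_loss = infilling_loss(
    "The pizza was delicious but the service was slow. "
    "Aspect terms and sentiments: <extra_id_0>.",
    "<extra_id_0> pizza is positive; service is negative <extra_id_1>",
)

print(float(pt_loss), float(ft_loss))  # both stages optimize one objective

Because the two stages share one objective, fine-tuning never has to re-purpose the model for a new output format, which is one plausible reading of how the framework narrows the objective gap the abstract mentions.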