Computer science
Adapter (computing)
Benchmarking
Context (archaeology)
Artificial intelligence
Natural language processing
Biology
Operating system
Business
Paleontology
Marketing
Authors
Chen Zhang, Ren Liu, Fang Ma, Jingang Wang, Wei Wu, Dandan Song
Source
Journal: Cornell University - arXiv
Date: 2022-09-02
Identifier
DOI: 10.48550/arxiv.2209.00820
Abstract
Structural bias has recently been exploited for aspect sentiment triplet extraction (ASTE) and has led to improved performance. On the other hand, it is recognized that explicitly incorporating structural bias has a negative impact on efficiency, whereas pretrained language models (PLMs) can already capture implicit structures. Thus, a natural question arises: Is structural bias still a necessity in the context of PLMs? To answer this question, we propose to address the efficiency issues by using an adapter to integrate structural bias into the PLM and by using a cheap-to-compute relative position structure in place of the syntactic dependency structure. Benchmarking evaluation is conducted on the SemEval datasets. The results show that our proposed structural adapter is beneficial to PLMs and achieves state-of-the-art performance over a range of strong baselines, yet with a light parameter demand and low latency. Meanwhile, we raise the concern that the current default of evaluating on small-scale data yields under-confident conclusions. Consequently, we release a large-scale dataset for ASTE. The results on the new dataset suggest that the structural adapter remains effective and efficient at large scale. Overall, we conclude that structural bias should still be considered a necessity even with PLMs.
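To make the idea concrete, below is a minimal NumPy sketch of the kind of mechanism the abstract describes: a bottleneck adapter whose attention scores are biased by a cheap-to-compute relative position structure (bucketed token distances) instead of a syntactic dependency parse. This is an illustrative assumption, not the authors' exact architecture; the function names, the bucketing scheme, and the clipping distance `max_dist` are all hypothetical choices for the example.

```python
import numpy as np

def relative_position_bucket(n, max_dist=4):
    # Relative offsets j - i for an n-token sequence, clipped to
    # [-max_dist, max_dist] and shifted to indices [0, 2 * max_dist].
    idx = np.arange(n)
    rel = idx[None, :] - idx[:, None]
    return np.clip(rel, -max_dist, max_dist) + max_dist

def structural_adapter(h, Wd, Wu, rel_emb, max_dist=4):
    """Bottleneck adapter with a relative-position attention bias (sketch).

    h:       (n, d) PLM hidden states
    Wd:      (d, r) down-projection to the bottleneck
    Wu:      (r, d) up-projection back to the model dimension
    rel_emb: (2 * max_dist + 1,) learned scalar bias per distance bucket
    """
    z = h @ Wd                                   # down-project
    scores = z @ z.T / np.sqrt(z.shape[1])       # self-attention scores
    scores = scores + rel_emb[relative_position_bucket(h.shape[0], max_dist)]
    attn = np.exp(scores - scores.max(-1, keepdims=True))
    attn = attn / attn.sum(-1, keepdims=True)    # row-wise softmax
    return h + (attn @ z) @ Wu                   # residual connection
```

Because the adapter only adds the small projections and a per-bucket bias vector, its parameter and latency overhead stays light, which is the efficiency argument the abstract makes for replacing explicit dependency structures.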