Keywords: Computer science, Offensive language, Vocabulary, Artificial intelligence, Classifier (UML), Optics (focus), Language model, Natural language processing, Newspaper, Transfer learning, Training set, Machine learning, Speech recognition, Linguistics, Philosophy, Business, Physics, Economics, Management, Optics, Advertising
Authors
Andraž Pelicon, Ravi Shekhar, Blaž Škrlj, Matthew Purver, Senja Pollak
Source
Journal: PeerJ
Date: 2021-06-25
Volume/Issue: 7: e559
Citations: 21
Abstract
Platforms that feature user-generated content (social media, online forums, newspaper comment sections, etc.) have to detect and filter offensive speech within large, fast-changing datasets. While many automatic methods have been proposed and achieve good accuracies, most of these focus on the English language and are hard to apply directly to languages in which few labeled datasets exist. Recent work has therefore investigated the use of cross-lingual transfer learning to solve this problem, training a model in a well-resourced language and transferring to a less-resourced target language; but performance has so far been significantly less impressive. In this paper, we investigate the reasons for this performance drop via a systematic comparison of pre-trained models and intermediate training regimes on five different languages. We show that using a better pre-trained language model results in a large gain in overall performance and in zero-shot transfer, and that intermediate training on other languages is effective when little target-language data is available. We then use multiple analyses of classifier confidence and language model vocabulary to shed light on exactly where these gains come from and into the sources of the most typical mistakes.
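The training regimes the abstract compares can be illustrated with a toy sketch. Here a trivial bag-of-words classifier stands in for the paper's multilingual transformer, and the example sentences and labels are invented; the point is only the difference between zero-shot transfer (training on source-language data alone) and intermediate training (adding a handful of target-language examples).

```python
# Toy stand-in for the transfer regimes compared in the paper: a
# word-count classifier is NOT the paper's method, just a minimal
# sketch of "train on source" vs. "train on source + a little target".
from collections import Counter

def train(examples):
    """Count word occurrences per label ('off' = offensive, 'ok')."""
    counts = {"off": Counter(), "ok": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(model, text):
    """Pick the label whose training-word counts overlap the text more."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in model.items()}
    return max(scores, key=scores.get)

# Well-resourced source language: plenty of labeled data (invented).
source = [("you are an idiot", "off"), ("have a nice day", "ok"),
          ("total idiot take", "off"), ("nice work today", "ok")]
# Less-resourced target language: only a couple of labeled examples.
target_few = [("du bist ein idiot", "off"), ("schoenen tag noch", "ok")]

zero_shot = train(source)                  # source-language data only
intermediate = train(source + target_few)  # plus a little target data

print(predict(intermediate, "ein idiot"))  # -> off
```

With only source-language counts, the zero-shot model has no evidence for purely target-language words; adding even two target examples (the "intermediate training" regime) lets the model score them, which mirrors the paper's finding that a little target-language data helps most when it is scarce.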