Keywords
Computer science
Natural language processing
Artificial intelligence
Encoder
Transfer learning
Transformer
Domain
Language model
Usability
Human-computer interaction
Mathematics
Quantum mechanics
Operating systems
Physics
Mathematical analysis
Voltage
Authors
Arkadipta De, Dibyanayan Bandyopadhyay, Baban Gain, Asif Ekbal
Source
Journal: ACM Transactions on Asian and Low-Resource Language Information Processing
Date: 2021-11-02
Volume/Issue: 21 (1): 1-20
Citations: 34
Abstract
Fake news classification is one of the most interesting problems and has attracted considerable attention from researchers in artificial intelligence, natural language processing, and machine learning (ML). Most current work on fake news detection targets the English language, which limits its widespread usability, especially outside the English-literate population. Although multilingual web content has grown, fake news classification in low-resource languages remains a challenge due to the non-availability of annotated corpora and tools. This article proposes an effective neural model based on multilingual Bidirectional Encoder Representations from Transformers (BERT) for domain-agnostic multilingual fake news classification. A wide variety of experiments, including language-specific and domain-specific settings, are conducted. The proposed model achieves high accuracy in both domain-specific and domain-agnostic experiments, and it also outperforms the current state-of-the-art models. We perform experiments in zero-shot settings to assess the effectiveness of language-agnostic feature transfer across different languages, showing encouraging results. Cross-domain transfer experiments are also performed to assess language-independent feature transfer of the model. We also offer a multilingual, multidomain fake news detection dataset covering five languages and seven different domains, which could be useful for research and development in resource-scarce scenarios.
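The pipeline the abstract describes — a multilingual encoder that maps text in any language to language-agnostic features, followed by a classification head that can then be applied zero-shot to unseen languages — can be sketched as below. This is a minimal toy illustration, not the authors' implementation: `toy_encode` is a hypothetical hash-based stand-in for multilingual BERT's pooled sentence embedding, and the linear head is untrained.

```python
import hashlib
import math

DIM = 16  # toy feature size; multilingual BERT's pooled output is 768-dim


def toy_encode(text: str) -> list[float]:
    """Hypothetical stand-in for a multilingual encoder: maps text in any
    language to a fixed-size feature vector. In the paper's setting this
    role is played by multilingual BERT."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:DIM]]


class FakeNewsClassifier:
    """Linear (logistic) classification head over encoder features.

    Weights start at zero here; in practice the head (and encoder) would
    be fine-tuned on labeled fake-news data in the source language(s)."""

    def __init__(self, dim: int = DIM):
        self.weights = [0.0] * dim
        self.bias = 0.0

    def score(self, text: str) -> float:
        feats = toy_encode(text)
        z = sum(w * f for w, f in zip(self.weights, feats)) + self.bias
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> P(fake)

    def predict(self, text: str) -> str:
        return "fake" if self.score(text) >= 0.5 else "real"


clf = FakeNewsClassifier()
# Because the encoder interface is language-agnostic, the same head can
# be queried on text in a language it never saw during training:
print(clf.predict("Breaking: scientists confirm the moon is hollow!"))
print(clf.predict("चंद्रमा खोखला है, वैज्ञानिकों ने पुष्टि की!"))
```

The design point the sketch makes is the one behind the paper's zero-shot experiments: only the encoder needs to be multilingual; the classification head sees a fixed-size feature vector and is indifferent to the input language.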