Debiasing
Adversarial system
Computer science
Originality
Data science
Fake news
Artificial intelligence
Machine learning
Computer security
Internet privacy
Psychology
Social psychology
Creativity
Authors
Karen M. DSouza, Aaron M. French
Identifiers
DOI: 10.1108/intr-03-2022-0176
Abstract
Purpose: Purveyors of fake news perpetuate information that can harm society, including businesses. Social media's reach quickly amplifies distortions of fake news. Research has not yet fully explored the mechanisms of such adversarial behavior or the adversarial machine learning techniques that might be deployed to detect fake news. Debiasing techniques are also explored to combat the generation of fake news using adversarial data. The purpose of this paper is to present the challenges and opportunities in fake news detection.

Design/methodology/approach: First, this paper provides an overview of adversarial behaviors and current machine learning techniques. Next, it describes the use of long short-term memory (LSTM) to identify fake news in a corpus of articles. Finally, it presents a novel adversarial behavior approach to protect targeted business datasets from attacks.

Findings: This research highlights the need for a corpus of fake news that can be used to evaluate classification methods. Adversarial debiasing using IBM's Artificial Intelligence Fairness 360 (AIF360) toolkit can improve the disparate impact of unfavorable characteristics of a dataset. Debiasing also demonstrates significant potential to reduce fake news generation based on the inherent bias in the data. These findings provide avenues for further research on adversarial collaboration and robust information systems.

Originality/value: Adversarial debiasing of datasets demonstrates that by reducing bias related to protected attributes, such as sex, race and age, businesses can reduce the potential for exploitation to generate fake news through adversarial data.
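The findings above hinge on adversarial debiasing with IBM's AIF360 toolkit. As a minimal sketch (not the authors' implementation), the Python snippet below shows how AIF360's AdversarialDebiasing in-processing algorithm can reduce disparate impact with respect to a protected attribute such as sex. The Adult census dataset, the group definitions and the parameter values are illustrative assumptions standing in for the business datasets discussed in the paper.

```python
# Sketch of adversarial debiasing with IBM's AIF360 toolkit (assumptions noted above).
import tensorflow.compat.v1 as tf
from aif360.datasets import AdultDataset
from aif360.algorithms.inprocessing import AdversarialDebiasing
from aif360.metrics import BinaryLabelDatasetMetric

tf.disable_eager_execution()  # AdversarialDebiasing is built on TF1-style sessions

# Protected attribute 'sex' (1 = privileged, 0 = unprivileged in AdultDataset's encoding)
privileged_groups = [{'sex': 1}]
unprivileged_groups = [{'sex': 0}]

# Illustrative dataset; the Adult raw files must be present in AIF360's data folder.
dataset = AdultDataset()
train, test = dataset.split([0.7], shuffle=True)

sess = tf.Session()
debiaser = AdversarialDebiasing(privileged_groups=privileged_groups,
                                unprivileged_groups=unprivileged_groups,
                                scope_name='debiased_classifier',
                                debias=True,   # set False to train a plain classifier for comparison
                                sess=sess)
debiaser.fit(train)
predictions = debiaser.predict(test)

# Disparate impact close to 1.0 indicates little disparity between the two groups.
metric = BinaryLabelDatasetMetric(predictions,
                                  unprivileged_groups=unprivileged_groups,
                                  privileged_groups=privileged_groups)
print('Disparate impact after debiasing:', metric.disparate_impact())
sess.close()
```

In practice, one would compare the disparate impact of a debiased and a non-debiased classifier on the same split to quantify the improvement the abstract refers to.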