Keywords
Argumentation theory
Computer science
Artificial intelligence
Content moderation
Machine learning
Argumentation framework
Social media
Data science
Epistemology
Philosophy
World Wide Web
Identifier
DOI: 10.1287/isre.2020.0097
Abstract
To combat false information, social media sites have relied heavily on content moderation, mostly performed by human workers. However, human content moderation entails multiple problems, including substantial labor costs, limited effectiveness, and ethical concerns. To overcome these problems, social media companies are investing aggressively in artificial intelligence-powered false information detection systems. Extant efforts, however, have failed to account for the nature of human argumentation, delegating the inference of truth to the black box of neural networks. They neglect important aspects of how humans judge the veracity of an argument, which creates serious challenges. To this end, building on Toulmin's model of argumentation, we propose a computational framework that helps machine learning models for false information identification capture the connection between a claim (whose veracity needs to be verified) and evidence (which contains information to support or refute the claim). Two experiments testing model performance and explainability reveal that our framework outperforms cutting-edge machine learning methods, offers better explainability, and has positive effects on human task performance, trust in algorithms, and confidence in decision making. Our results shed new light on the growing field of automated false information identification.
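The abstract describes the framework only at a high level: a claim whose veracity is in question is paired with evidence that may support or refute it, following Toulmin's model of argumentation. As a minimal sketch of that claim-evidence structure, the Python below pairs the two in a small data type and scores their connection. The `ToulminArgument` type, the bag-of-words similarity, and the threshold are all illustrative assumptions standing in for the paper's learned neural components, not a reproduction of the authors' method.

```python
from dataclasses import dataclass
from collections import Counter
import math


@dataclass
class ToulminArgument:
    """Components of Toulmin's model relevant to veracity checking.

    The exact fields and their use here are an illustrative assumption,
    not the authors' implementation.
    """
    claim: str         # the statement whose veracity needs to be verified
    evidence: str      # text that may support or refute the claim (Toulmin's "data")
    warrant: str = ""  # optional link licensing the inference from evidence to claim


def _bag_of_words(text: str) -> Counter:
    """Lowercased bag-of-words; a crude stand-in for a learned text encoder."""
    return Counter(text.lower().split())


def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def score_claim_evidence(arg: ToulminArgument) -> float:
    """Score how strongly the evidence relates to the claim.

    In the paper this connection is presumably learned; lexical overlap is
    used here only so the sketch runs without external dependencies.
    """
    return _cosine(_bag_of_words(arg.claim), _bag_of_words(arg.evidence))


def verdict(arg: ToulminArgument, threshold: float = 0.5) -> str:
    """Map the claim-evidence score to a coarse label (threshold is arbitrary)."""
    return "supported" if score_claim_evidence(arg) >= threshold else "unverified"


if __name__ == "__main__":
    arg = ToulminArgument(
        claim="Drinking bleach cures the flu",
        evidence="Health agencies state that drinking bleach is dangerous and cures nothing",
    )
    print(verdict(arg), round(score_claim_evidence(arg), 2))
```

Note that a crude overlap score like this cannot distinguish supporting evidence from refuting evidence; modeling that distinction, as human judges do when assessing an argument's veracity, is precisely the gap the paper's framework targets.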