Heuristics
Computer science
Simplicity (philosophy)
Probabilistic logic
Adversarial system
Bounded function
Fraction (chemistry)
Process (computing)
Theoretical computer science
Artificial intelligence
Mathematics
Epistemology
Programming language
Mathematical analysis
Philosophy
Chemistry
Organic chemistry
Operating system
Authors
Anna Brandenberger, Cassandra Marcussen, Elchanan Mossel, Madhu Sudan
Identifier
DOI:10.1073/pnas.2416866122
Abstract
As knowledge accumulates in science and society in a distributed fashion, erroneous derivations can be introduced into the corpus of knowledge. Such derivations can compromise the validity of any units of knowledge that rely on them in the future. Can societal knowledge maintain some level of integrity given simple distributed error-checking mechanisms? In this paper, we investigate the following formulation of the question: assuming that a constant fraction of the new derivations is wrong, is it possible for simple error-checking mechanisms that apply when a new unit of knowledge is derived to maintain the integrity of the corpus of knowledge? This question was introduced by Ben-Eliezer et al. [“Is this correct? Let’s check!” in 14th Innovations in Theoretical Computer Science Conference (ITCS, 2023)], who gave a robust affirmative answer in a specific probabilistic model for knowledge accumulation. Namely, this model required that new units depend on just one existing unit and join the process according to a preferential attachment rule. In this work, we consider much more general families of processes of knowledge accumulation, where new units may depend on multiple existing units and join according to varied attachment mechanisms. We also consider models with a (random) fraction of insertions of adversarial nodes. We give a robust affirmative answer to the above question by showing that for all of these models, as long as many of the units follow simple local heuristics for checking a bounded number of units they depend on, all errors will be eventually eliminated.
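The abstract describes a process in which units of knowledge arrive sequentially, attach to existing units (e.g., by preferential attachment), are erroneous with some constant probability, and are audited by simple local checks on the units they depend on. The following is a minimal simulation sketch of one such process, not the paper's actual model: each new unit depends on a single parent chosen proportionally to degree, and with some probability it audits that parent and purges it if it is false (its own derivation is wrong or it inherited an error). All parameter names and the purge-and-resample rule are illustrative assumptions.

```python
import random

def simulate(n_units=2000, error_rate=0.1, check_prob=0.5, seed=0):
    """Toy model of distributed knowledge accumulation with local checking.

    Each new unit attaches to one existing live unit chosen proportionally
    to degree (preferential attachment). With probability `error_rate` its
    own derivation step is wrong; a unit is 'false' if its derivation is
    wrong or its parent is false. With probability `check_prob` the
    arriving unit audits its chosen parent and, on finding it false,
    removes it from the corpus and resamples another parent.
    Returns the fraction of surviving units that are false.
    """
    rng = random.Random(seed)
    false_flag = [False]  # unit 0 is a correct axiom/root
    alive = [True]
    degree = [1]
    for _ in range(n_units - 1):
        while True:
            # sample a live parent with probability proportional to degree
            total = sum(d for d, a in zip(degree, alive) if a)
            r = rng.uniform(0, total)
            acc = 0.0
            for i, (d, a) in enumerate(zip(degree, alive)):
                if not a:
                    continue
                acc += d
                if r <= acc:
                    parent = i
                    break
            # local check: audit the parent with probability check_prob
            if rng.random() < check_prob and false_flag[parent]:
                alive[parent] = False  # purge the erroneous unit
                continue               # and resample a new parent
            break
        wrong = rng.random() < error_rate
        false_flag.append(wrong or false_flag[parent])
        alive.append(True)
        degree.append(1)
        degree[parent] += 1
    survivors = [f for f, a in zip(false_flag, alive) if a]
    return sum(survivors) / len(survivors)
```

With `error_rate=0` no false units ever arise, and with `error_rate=1` and no checking essentially the whole corpus (everything but the root) is false; intermediate settings let one observe how local auditing suppresses the surviving error fraction, in the spirit of the paper's question.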