Robustness (evolution)
Computer science
Artificial intelligence
Epistemology
Context (archaeology)
Machine learning
Data science
History
Philosophy
Biochemistry
Chemistry
Gene
Archaeology
Identifier
DOI:10.1145/3351095.3372836
Abstract
The explainable AI literature contains multiple notions of what an explanation is and what desiderata explanations should satisfy. One implicit source of disagreement is how far the explanations should reflect real patterns in the data or the world. This disagreement underlies debates about other desiderata, such as how robust explanations are to slight perturbations in the input data. I argue that robustness is desirable to the extent that we're concerned about finding real patterns in the world. The import of real patterns differs according to the problem context. In some contexts, non-robust explanations can constitute a moral hazard. By being clear about the extent to which we care about capturing real patterns, we can also determine whether the Rashomon Effect is a boon or a bane.
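The abstract appeals to the robustness of explanations under slight perturbations of the input data. The sketch below is not from the paper; it is a minimal illustration, under assumed choices (a toy fixed nonlinear model, numerical input gradients as the "explanation", cosine similarity averaged over Gaussian perturbations as the robustness score), of one way such robustness could be quantified.

```python
# Minimal sketch of measuring explanation robustness to small input perturbations.
# All modelling choices here (toy model, gradient saliency, cosine-similarity score)
# are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear model: f(x) = sigmoid(w . tanh(V x)) with fixed random weights.
V = rng.normal(size=(4, 5))
w = rng.normal(size=4)

def model(x):
    return 1.0 / (1.0 + np.exp(-w @ np.tanh(V @ x)))

def explanation(x, eps=1e-5):
    """Numerical input gradient of the model output (a simple saliency vector)."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (model(x + step) - model(x - step)) / (2 * eps)
    return grad

def robustness(x, noise_scale=0.01, n_samples=100):
    """Mean cosine similarity between the explanation at x and at perturbed inputs."""
    e0 = explanation(x)
    sims = []
    for _ in range(n_samples):
        x_pert = x + rng.normal(scale=noise_scale, size=x.shape)
        e1 = explanation(x_pert)
        sims.append(e0 @ e1 / (np.linalg.norm(e0) * np.linalg.norm(e1) + 1e-12))
    return float(np.mean(sims))

x = rng.normal(size=5)
print(f"explanation robustness near x: {robustness(x):.3f}")  # values near 1.0 = robust
```

On this reading, a score close to 1 means the explanation reflects a locally stable pattern; low scores correspond to the non-robust explanations whose desirability the paper argues depends on the problem context.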