Keywords: Black box, Harm, Computer science, Key (lock), Artificial intelligence, Criminal justice, Economic justice, Machine learning, Data science, Criminology, Psychology, Computer security, Political science, Social psychology, Law
DOI: 10.1038/s42256-019-0048-x
Abstract
Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems in healthcare, criminal justice, and other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward: design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.
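As a minimal sketch (not taken from the paper) of what "inherently interpretable" can mean in practice, the example below fits a depth-limited decision tree on a synthetic dataset and prints its complete decision rules. The dataset, feature names, and model choice are illustrative assumptions; the point is that the printed rules are the model itself, so no post-hoc explanation of a separate black box is required.

```python
# Sketch: an inherently interpretable model whose full logic can be read and audited.
# Dataset and feature names are synthetic placeholders, not from the paper.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a high-stakes tabular dataset.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
feature_names = ["feature_0", "feature_1", "feature_2", "feature_3"]

# Constraining depth keeps the entire model small enough to read end to end.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# The printed rules *are* the model, not an approximation produced after the fact.
print(export_text(model, feature_names=feature_names))
```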