Counterfactual thinking
Transparency (behavior)
Software deployment
Stakeholder
Computer science
Debugging
Feature (linguistics)
Process management
Knowledge management
Artificial intelligence
Business
Psychology
Political science
Public relations
Software engineering
Computer security
Philosophy
Social psychology
Programming language
Linguistics
Authors
Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José M. F. Moura, Peter Eckersley
Source
Venue: arXiv (Cornell University)
Date: 2019-09-13
Citations: 4
Abstract
Explainable machine learning offers the potential to provide stakeholders with insights into model behavior by using various methods such as feature importance scores, counterfactual explanations, or influential training data. Yet there is little understanding of how organizations use these methods in practice. This study explores how organizations view and use explainability for stakeholder consumption. We find that, currently, the majority of deployments are not for end users affected by the model but rather for machine learning engineers, who use explainability to debug the model itself. There is thus a gap between explainability in practice and the goal of transparency, since explanations primarily serve internal stakeholders rather than external ones. Our study synthesizes the limitations of current explainability techniques that hamper their use for end users. To facilitate end user interaction, we develop a framework for establishing clear goals for explainability. We end by discussing concerns raised regarding explainability.
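The abstract reports that most deployed explainability serves machine learning engineers debugging their own models, often via feature importance scores. As a minimal sketch of that debugging workflow, assuming scikit-learn and an illustrative dataset and model (neither is taken from the paper), permutation importance flags the features a model leans on most, which an engineer can then sanity-check against domain knowledge:

```python
# Hedged sketch: permutation feature importance for model debugging.
# Assumptions (not from the paper): scikit-learn, the breast-cancer
# demo dataset, and a random forest stand in for a deployed model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure the score
# drop; large drops mark features the model depends on heavily.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Print the five most influential features for inspection.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: -t[1],
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

An unexpectedly dominant feature here (for example, an ID-like or leaked column) is exactly the kind of internal debugging signal the study finds explainability being used for, as opposed to explanations packaged for end users.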