Health care
Transparency (behavior)
Knowledge management
Context (archaeology)
Stakeholder
Exploratory research
Perspective (graphical)
Subject
Business
Computer science
Psychology
Political science
Public relations
Sociology
Geography
Artificial intelligence
Social science
Curriculum
Archaeology
Law
Computer security
Pedagogy
Authors
Julie Gerlings, Millie Søndergaard Jensen, Arisa Shollo
Source
Journal: Intelligent Systems Reference Library
Date: 2021-11-26
Pages: 169-198
Citations: 21
Identifier
DOI: 10.1007/978-3-030-83620-7_7
Abstract
Advances in AI technologies have resulted in superior levels of AI-based model performance. However, this has also led to a greater degree of model complexity, resulting in “black box” models. In response to the AI black box problem, the field of explainable AI (xAI) has emerged with the aim of providing explanations catered to human understanding, trust, and transparency. Yet, we still have a limited understanding of how xAI addresses the need for explainable AI in the context of healthcare. Our research explores the differing explanation needs amongst stakeholders during the development of an AI system for classifying COVID-19 patients for the ICU. We demonstrate that there is a constellation of stakeholders who have different explanation needs, not just the “user”. Further, the findings demonstrate how the need for xAI emerges through concerns associated with specific stakeholder groups, i.e., the development team, subject matter experts, decision makers, and the audience. Our findings contribute to the expansion of xAI by highlighting that different stakeholders have different explanation needs. From a practical perspective, the study provides insights on how AI systems can be adjusted to support different stakeholders’ needs, ensuring better implementation and operation in a healthcare context.