Concepts
Task (project management)
Constructive
Computer science
Identification (biology)
Perception
Knowledge management
Artificial intelligence
Data science
Process (computing)
Psychology
Management
Plant
Biology
Operating system
Economics
Neuroscience
Authors
Sunnie Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández
Identifier
DOI:10.1145/3544548.3581001
Abstract
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users’ explainability needs and behaviors around XAI explanations. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a mixed-methods study with 20 end-users of a real-world AI application, the Merlin bird identification app, and inquired about their XAI needs, uses, and perceptions. We found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details. Relatedly, participants intended to use XAI explanations for various purposes beyond understanding the AI’s outputs: calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI, and giving constructive feedback to developers. Finally, among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanations. We discuss the implications of our findings and provide recommendations for future XAI design.