Clarity
Computer science
Visualization
Transparency (behavior)
Artificial intelligence
Cognitive load
Cognition
Presentation
Psychology
Computer security
Biochemistry
Medicine
Radiology
Neuroscience
Chemistry
Authors
Antoine Hudon, Théophile Demazure, Alexander J. Karran, Pierre-Majorique Léger, Sylvain Sénécal
Source
Journal: Lecture Notes in Information Systems and Organisation
Date: 2021-01-01
Pages: 237-246
Cited by: 19
Identifier
DOI:10.1007/978-3-030-88900-5_27
Abstract
Explainable Artificial Intelligence (XAI) aims to bring transparency to AI systems by translating, simplifying, and visualizing their decisions. While society remains skeptical of AI systems, studies show that transparent and explainable AI systems improve human confidence in AI. We present preliminary results from a study designed to assess two presentation-order methods and three AI decision-visualization attribution models, determining each visualization's impact on a user's cognitive load and confidence in the system by asking participants to complete a visual decision-making task. The results show that both presentation order and morphological clarity affect cognitive load. Furthermore, a negative correlation was revealed between cognitive load and confidence in the AI system. Our findings have implications for the design of future AI systems and may facilitate better collaboration between humans and AI.