Operationalization
Affordance
Craft
Transparency (behavior)
Black box
Engineering ethics
Computer science
Sociology
Public relations
Political science
Epistemology
Artificial intelligence
Engineering
Law
History
Human-computer interaction
Philosophy
Archaeology
Authors
Upol Ehsan, Elizabeth Anne Watkins, Philipp Wintersberger, Carina Manger, Sunnie Kim, Niels van Berkel, Andreas Riener, Mark Riedl
Identifier
DOI:10.1145/3613905.3636311
Abstract
Human-centered XAI (HCXAI) advocates that algorithmic transparency alone is not sufficient for making AI explainable. Explainability of AI is more than just "opening" the black box — who opens it matters just as much, if not more, as the ways of opening it. In the era of Large Language Models (LLMs), is "opening the black box" still a realistic goal for XAI? In this fourth CHI workshop on Human-centered XAI (HCXAI), we build on the maturation through the previous three installments to craft the coming-of-age story of HCXAI in the era of Large Language Models (LLMs). We aim towards actionable interventions that recognize both affordances and pitfalls of XAI. The goal of the fourth installment is to question how XAI assumptions fare in the era of LLMs and examine how human-centered perspectives can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we emphasize "operationalizing." We seek actionable analysis frameworks, concrete design guidelines, transferable evaluation methods, and principles for accountability.