Human-in-the-loop
Debugging
Computer science
End user
Context (archaeology)
Human–computer interaction
Domain (mathematical analysis)
Artificial intelligence
World Wide Web
Mathematics
Biology
Mathematical analysis
Paleontology
Programming language
Authors
Yuri Nakao, Simone Stumpf, Subeida Ahmed, Aisha Naseer, Lorenzo Strappelli
Source
Journal: ACM Transactions on Interactive Intelligent Systems
[Association for Computing Machinery]
Date: 2022-04-28
Volume/Issue: 12 (3): 1-30
Citations: 25
Abstract
Ensuring fairness in artificial intelligence (AI) is important to counteract bias and discrimination in far-reaching applications. Recent work has started to investigate how humans judge fairness and how to support machine learning experts in making their AI models fairer. Drawing inspiration from an Explainable AI approach called explanatory debugging used in interactive machine learning, our work explores designing interpretable and interactive human-in-the-loop interfaces that allow ordinary end-users without any technical or domain background to identify potential fairness issues and possibly fix them in the context of loan decisions. Through workshops with end-users, we co-designed and implemented a prototype system that allowed end-users to see why predictions were made, and then to change weights on features to “debug” fairness issues. We evaluated the use of this prototype system through an online study. To investigate the implications of diverse human values about fairness around the globe, we also explored how cultural dimensions might play a role in using this prototype. Our results contribute to the design of interfaces to allow end-users to be involved in judging and addressing AI fairness through a human-in-the-loop approach.
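The abstract describes a prototype in which end-users first see why a loan prediction was made and then change weights on features to "debug" perceived fairness issues. A minimal sketch of that explain-then-adjust loop, assuming a simple linear scoring model with hypothetical feature names and weights (the paper does not specify its actual implementation):

```python
# Hypothetical sketch of the "explain, then adjust weights" loop described in the
# abstract; the feature names, weights, and linear scoring model are illustrative
# assumptions, not the authors' actual prototype.

FEATURES = ["income", "credit_history", "loan_amount", "age", "postcode_risk"]

# Model-learned weights (assumed): positive values push toward loan approval.
model_weights = {
    "income": 0.40,
    "credit_history": 0.35,
    "loan_amount": -0.20,
    "age": 0.05,
    "postcode_risk": -0.25,
}

def score(applicant, weights):
    """Linear score: weighted sum of (already normalised) feature values."""
    return sum(weights[f] * applicant[f] for f in FEATURES)

def explain(applicant, weights):
    """Per-feature contributions, largest magnitude first, as a simple 'why' view."""
    contribs = {f: weights[f] * applicant[f] for f in FEATURES}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

def debug_weights(weights, adjustments):
    """End-user 'fairness debugging': override selected feature weights."""
    updated = dict(weights)
    updated.update(adjustments)
    return updated

if __name__ == "__main__":
    applicant = {"income": 0.3, "credit_history": 0.8,
                 "loan_amount": 0.6, "age": 0.4, "postcode_risk": 0.9}

    print("Original score:", round(score(applicant, model_weights), 3))
    for feature, contribution in explain(applicant, model_weights):
        print(f"  {feature}: {contribution:+.3f}")

    # A user who judges postcode-based risk to be an unfair proxy zeroes it out.
    fairer = debug_weights(model_weights, {"postcode_risk": 0.0})
    print("Adjusted score:", round(score(applicant, fairer), 3))
```

The contribution list stands in for the paper's interpretable "why" view, and the weight override stands in for the interactive debugging step; the actual system was co-designed in workshops and evaluated in an online study, as the abstract notes.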