Topics: Construct (Python library), Computer science, Provenance (archaeology), Recidivism, Task (project management), Human-in-the-loop, Perception, Management science, Work (physics), Artificial intelligence, Risk analysis (engineering), Operations research, Psychology, Mathematics, Economics, Engineering, Biology, Paleontology, Neuroscience, Management, Programming language, Mechanical engineering, Criminology, Medicine
Authors
Mohammad Yaghini, Hoda Heidari, Andreas Krause
Source
Journal: Cornell University - arXiv
Date: 2019-11-08
Cited by: 2
Abstract
Despite the recent surge of interest in designing and guaranteeing mathematical formulations of fairness, virtually all existing notions of algorithmic fairness fail to be adaptable to the intricacies and nuances of the decision-making context at hand. We argue that capturing such factors is an inherently human task, as it requires knowledge of the social background in which machine learning tools impact real people's outcomes and a deep understanding of the ramifications of automated decisions for decision subjects and society. In this work, we present a framework to construct a context-dependent mathematical formulation of fairness utilizing people's judgment of fairness. We utilize the theoretical model of Heidari et al. (2019)---which shows that most existing formulations of algorithmic fairness are special cases of economic models of Equality of Opportunity (EOP)---and present a practical human-in-the-loop approach to pinpoint the fairness notion in the EOP family that best captures people's perception of fairness in the given context. To illustrate our framework, we run human-subject experiments designed to learn the parameters of Heidari et al.'s EOP model (including circumstance, desert, and utility) in a hypothetical recidivism decision-making scenario. Our work takes an initial step toward democratizing the formulation of fairness and utilizing human judgment to tackle a fundamental shortcoming of automated decision-making systems: that the machine on its own is incapable of understanding and processing the human aspects and social context of its decisions.