Perception
Computer Science
Fairness measure
Scrutiny
Psychology
Political Science
Telecommunications
Throughput
Neuroscience
Law
Wireless
Authors
Jianlong Zhou,Sunny Verma,Mudit Mittal,Fang Chen
Identifier
DOI:10.1109/besc53957.2021.9635182
Abstract
The adoption of Artificial Intelligence (AI) is currently under scrutiny due to various concerns such as fairness, and how the fairness of an AI algorithm affects users' trust is a legitimate question to pursue. In this regard, we aim to understand the relationship between induced algorithmic fairness and its perception in humans. In particular, we are interested in whether these two are positively correlated and reflect substantive fairness. Furthermore, we also study how induced algorithmic fairness affects user trust in algorithmic decision making. To this end, we perform a user study that simulates candidate shortlisting with introduced (mathematically manipulated) fairness in a human resource recruitment setting. Our experimental results demonstrate that different levels of introduced fairness are positively related to human perception of fairness, and simultaneously positively related to user trust in algorithmic decision making. Interestingly, we also found that users are more sensitive to higher levels of introduced fairness than to lower levels. In addition, we summarize the theoretical and practical implications of this research with a discussion on the perception of fairness.
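The abstract does not specify which fairness measure was manipulated to create the different levels of introduced fairness. As a minimal sketch only, the snippet below computes demographic parity difference, one common group-fairness measure for selection tasks such as candidate shortlisting; the function name, group labels, and example data are hypothetical and are not taken from the paper.

```python
# Hedged sketch: illustrates one common group-fairness measure (demographic
# parity difference) for a shortlisting decision. This is NOT the paper's
# method; the labels "A"/"B" and the sample data are hypothetical.
from typing import Sequence


def demographic_parity_difference(selected: Sequence[int], group: Sequence[str]) -> float:
    """Gap between the highest and lowest group selection rates.

    selected: 1 if the candidate was shortlisted, 0 otherwise.
    group:    group label per candidate (e.g. "A" or "B").
    0.0 means all groups are selected at the same rate (maximally "fair"
    under this metric); larger values mean a larger gap between groups.
    """
    rates = []
    for g in set(group):
        members = [s for s, gi in zip(selected, group) if gi == g]
        rates.append(sum(members) / len(members))
    return max(rates) - min(rates)


# Example: group A is shortlisted at 60%, group B at 20% -> difference ~0.4
selected = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
group = ["A"] * 5 + ["B"] * 5
print(demographic_parity_difference(selected, group))
```

Under such a measure, "higher introduced fairness" would correspond to shortlists with a smaller selection-rate gap between groups, which is one plausible way to operationalize the manipulated fairness levels described above.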